A simple explanation of AI hallucination and why it matters.
AI hallucination is when an AI system produces information that sounds convincing but isn’t actually correct. It might invent facts, misquote sources, or confidently give an answer that has no real basis. The key issue isn’t just that the answer is wrong; it’s that the system gives no signal of uncertainty.
For example, an AI tool might generate a summary of an article that includes details that were never there, or cite a reference that doesn’t exist. This happens because AI models are designed to predict what a likely response looks like, not to check whether it is true. They optimize for what sounds plausible, not for what is accurate.
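To make the “predict, don’t verify” point concrete, here is a minimal toy sketch (not any real model or library, and the probability table is entirely made up): it picks a continuation purely by how statistically common it is, and nowhere in the process is there a step that checks whether the resulting claim is true.

```python
import random

# Toy stand-in for a language model: a table of continuations and how often
# each one tends to follow the prompt. The numbers are invented for illustration.
NEXT_PHRASE_PROBS = {
    "The article was published in": [("2019.", 0.4), ("2021.", 0.35), ("Nature.", 0.25)],
}

def continue_text(prompt: str) -> str:
    """Pick a likely-sounding continuation.

    Note what is missing: there is no fact-checking step. The choice is driven
    only by which continuation is statistically common, not by whether it is true.
    """
    phrases, weights = zip(*NEXT_PHRASE_PROBS[prompt])
    return prompt + " " + random.choices(phrases, weights=weights)[0]

if __name__ == "__main__":
    # The output always sounds specific and confident, even though the toy
    # table has no idea when (or whether) any article was actually published.
    print(continue_text("The article was published in"))
```

The sketch is deliberately oversimplified, but the gap it shows is the same one behind hallucination: a fluent, confident-sounding output with no built-in check against reality.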
Most of the time, the error isn’t obvious. The response reads well, feels complete, and fits the context, and that is exactly why it matters: if information looks reliable but isn’t, it becomes harder to know when to trust it and when to question it.
So the question becomes: How do we spot when something sounds right but isn’t?