In Depth

Hallucinations occur because LLMs are trained to produce fluent, plausible token sequences rather than verified facts. Common forms include invented citations, incorrect dates, fabricated statistics, and nonexistent people or products. Mitigation strategies include retrieval-augmented generation (RAG), grounding with external tools, chain-of-thought prompting, and confidence calibration. Hallucination rates are a key metric when evaluating LLMs for high-stakes applications.
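
The sketch below illustrates the grounding idea behind RAG under simplified assumptions: a toy word-overlap retriever stands in for a real search index, and the model call itself is left out, showing only how retrieved passages are assembled into a prompt that restricts the model to that context. The names (`corpus`, `retrieve`, `build_grounded_prompt`) are illustrative, not a specific library's API.

```python
corpus = [
    "The Eiffel Tower was completed in 1889 for the Exposition Universelle.",
    "Paris is the capital of France and its most populous city.",
    "The Louvre opened as a public museum in 1793.",
]

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(doc.lower().split())), doc) for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to the LLM; grounding the answer in
    # retrieved text reduces the chance of fabricated facts.
    print(build_grounded_prompt("When was the Eiffel Tower completed?", corpus))
```

In practice the toy retriever would be replaced by a vector or keyword search over a trusted corpus, but the principle is the same: the model is asked to answer from supplied evidence rather than from memory alone.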