Hallucination
When an AI system generates false or fabricated information that appears plausible and is presented as accurate
Full Definition
In artificial intelligence, hallucination refers to an AI system generating information that is factually incorrect, fabricated, or nonsensical while presenting it confidently as if it were true. The phenomenon arises because AI models, particularly large language models, are built to produce coherent-sounding responses even when they lack accurate information about a topic: they learn statistical patterns from training data and generate text from those patterns rather than from genuine understanding. When faced with unfamiliar topics, or when asked for specific details that were not in their training data, these systems may fill in the gaps with plausible-sounding but incorrect content, including fabricated facts, non-existent sources, invented statistics, or entirely fictional events presented as real.

Understanding AI hallucination is crucial for anyone using AI tools professionally: it underlines the need to fact-check AI-generated content and to avoid treating AI responses as authoritative sources without verification.
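The mechanism can be illustrated with a minimal, purely hypothetical sketch: a toy "model" that only knows how statistically plausible each continuation sounds, with no notion of whether the resulting claim is true. The contexts, candidate phrases, journal names, and probabilities below are invented for illustration and do not come from any real model or dataset.

```python
import random

# Toy next-token model: for each context it stores how *plausible* each
# continuation sounds, not whether the resulting statement is true.
# All entries here are made up purely for illustration.
toy_model = {
    "The study was published in the journal": [
        ("Nature", 0.4),                                # real journal
        ("The Lancet", 0.3),                            # real journal
        ("Advances in Clinical Metascience", 0.3),      # plausible-sounding but fictional
    ],
}

def generate_next(context: str) -> str:
    """Sample the next phrase purely by statistical plausibility."""
    candidates = toy_model[context]
    phrases = [phrase for phrase, _ in candidates]
    weights = [prob for _, prob in candidates]
    # The model has no concept of factual correctness; it simply samples
    # whatever "sounds right" given the context, which is why a fictional
    # journal can be produced with the same confidence as a real one.
    return random.choices(phrases, weights=weights, k=1)[0]

print(generate_next("The study was published in the journal"))
```

In this sketch, the fictional journal is sampled just as readily as the real ones because the only signal available is plausibility; real language models operate on the same principle at vastly larger scale, which is why their fabrications read as fluently as their correct answers.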
Examples
A chatbot confidently providing a fake scientific study with realistic-sounding authors and publication details when asked about a medical topic
An AI writing assistant creating fictional historical events or dates when helping with a research paper
A language model generating non-existent book titles and authors when asked for reading recommendations on a specific topic