AI Glossary

What is Hallucination?

When an AI model generates factually incorrect information with apparent confidence, "making up" details that aren't true.

Hallucination explained

AI hallucination is one of the biggest challenges in deploying language models. Because LLMs generate text by predicting likely next tokens, they can produce plausible-sounding but entirely fabricated facts, citations, names, or data. Good prompt engineering can substantially reduce hallucination, though not eliminate it: asking the AI to say "I don't know" when uncertain, providing source documents via RAG, and structuring prompts to reduce ambiguity all help.
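The mitigations above can be sketched as a simple prompt template. This is an illustrative sketch only: `build_grounded_prompt` and the sample source text are hypothetical, not any specific product's API, and the technique works with any chat-style LLM.

```python
def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved source
    documents (the RAG pattern) and explicitly permits an
    "I don't know" answer to discourage fabrication."""
    # Label each retrieved snippet so the model can cite it
    context = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, reply exactly: "
        "I don't know.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Hypothetical retrieved snippet for illustration
prompt = build_grounded_prompt(
    "When was the product launched?",
    ["The product launched in March 2021 after a two-year beta."],
)
print(prompt)
```

Pinning the model to labeled sources and giving it an explicit escape hatch ("I don't know") removes the pressure to invent an answer when the context is silent.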

Frequently asked questions

What is AI hallucination?
AI hallucination is when an AI model confidently generates false or fabricated information — inventing facts, citations, or data that don't exist.
How do I reduce AI hallucinations?
Use specific prompts that ask the AI to acknowledge uncertainty, provide reference documents for the AI to work from (RAG), verify outputs against authoritative sources, and use AI skills designed to minimize hallucination.
Browse ⚙️ engineering AI skills on Geni Kart

Find expert-crafted hallucination prompts and skill packs, ready to use in ChatGPT, Claude, or Gemini.

Browse engineering skills →

Related terms

Prompt Engineering
The practice of designing and refining text inputs (prompts) to get the best possible output from an AI model.
LLM
Large Language Model — a type of AI trained on vast amounts of text to understand and generate natural language.
RAG
Retrieval-Augmented Generation — a technique that enhances AI responses by retrieving relevant documents and supplying them as context.