
Hallucinations & Limitations

AI models can generate convincing but incorrect information. Learn to recognize and verify hallucinations to work reliably with AI.

Core Skill

AI understanding

AI models generate text based on probabilities, not truth. Understanding how AI works and where its limitations lie is essential for critically evaluating output. Those who understand AI prevent errors, recognize hallucinations, and know when additional verification is needed.


What are AI hallucinations?

AI hallucinations are a fundamental characteristic of how language models work. The model generates convincing but incorrect or non-existent information. It "makes up" answers that seem plausible, based on the data it was trained on, without actually understanding the facts.

Why does this happen? AI models are essentially prediction machines: they calculate the most likely next word based on patterns from billions of texts. A model is not a search engine and not a database of facts. It generates new text, every time, based on probabilities rather than truth.
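To make the prediction-machine idea concrete, here is a minimal sketch of next-word sampling in Python. The vocabulary and probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the principle is the same: it samples from a probability distribution, it doesn't look anything up.

```python
import random

# Toy next-word prediction: the "model" here is just a hand-made
# probability table over a tiny vocabulary (illustrative numbers only).
prompt = "The capital of France is"
next_word_probs = {
    "Paris": 0.70,    # most likely continuation
    "beautiful": 0.15,
    "Lyon": 0.10,
    "Berlin": 0.05,   # plausible-sounding but wrong: a hallucination when sampled
}

# Sample the next word from the distribution, exactly like a language
# model does; nothing is retrieved or verified.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print(prompt, random.choices(words, weights=weights, k=1)[0])
```

Run it a few times: most outputs are correct, but every so often "Berlin" comes out, fluent, confident, and wrong. That is a hallucination in miniature.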

The best AI models generate information that isn't true in approximately 1-2% of answers on average. This sounds small, but with intensive use you'll regularly encounter hallucinations. That's why verification is essential.
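To see why 1-2% still means frequent hallucinations, a quick back-of-the-envelope calculation helps. The sketch below assumes errors are independent with a fixed per-answer rate, which is a simplification, but the trend is what matters:

```python
# Chance of at least one hallucination in n answers, assuming an
# independent per-answer error rate p (a simplifying assumption).
p = 0.015  # 1.5%, the midpoint of the 1-2% range

for n in (10, 50, 200):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n} answers: {at_least_one:.0%} chance of at least one hallucination")
```

At 200 answers, already a realistic week of intensive use, the chance of running into at least one hallucination is roughly 95%.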

Think of AI as a hyper-smart intern

Very capable and fast, but you always need to check the work. AI is trained on general knowledge, not on your specific work processes, brand guidelines, or current company data.

Want to know how different AI models perform regarding hallucinations? Check the Vectara Hallucination Leaderboard for a current overview.

Types of hallucinations

AI hallucinations come in different forms. Recognition is the first step toward safe use.

Factual inaccuracies

Convincing-sounding but fabricated 'facts', statistics, dates, or historical events. For example: wrong founding dates of companies, non-existent laws, or fictional market figures.

Incorrect details

Wrong dates, amounts, names, product features, or technical specifications. For example: an incorrect salary average, wrong product names, or erroneous company names in a sector.

Logical and mathematical errors

Calculation errors or contradictory reasoning within the same output. For example: claiming that "5.11 is greater than 5.3", or calculations that don't match the given input.
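A one-line check exposes this particular claim: as numbers, 5.11 is smaller than 5.3 (think 5.11 versus 5.30), even though "11" looks bigger than "3" as text, which is exactly the kind of pattern-level confusion a language model falls into.

```python
# Verify the model's claim with actual arithmetic instead of text intuition.
print(5.11 > 5.3)   # False: 5.11 < 5.30
print(5.11 < 5.3)   # True
```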

Fabricated sources and references

Non-existent studies, articles, experts, or quotes. The model can reference a "2023 McKinsey study" that was never published, or an expert who doesn't exist.

Context misinterpretation

Answers that contain correct elements but don't fit the specific question or context. The model doesn't understand the nuance of your question and gives a generic answer.

Recognizing and verifying: The VAC check

Develop a verification mindset. Every AI suggestion is a starting point, not an end result. This discipline makes you a better professional because you're forced to understand what the content actually says instead of blindly accepting it.

Use the VAC check to systematically recognize hallucinations:

  • Verifiable: Are the facts, figures, dates, and names checkable via reliable sources?
  • Accurate: Are the details correct, with no calculation errors or wrong specifications?
  • Consistent: Does it align with your existing knowledge and expertise? Would a colleague in your field formulate or conclude this the same way?
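If you review AI output often, it can help to keep the VAC check as a literal checklist. Below is a minimal sketch of that idea as a plain Python structure; the field names mirror the check above, and everything else (the claim, the note) is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class VACCheck:
    """The VAC check for one piece of AI output."""
    claim: str
    verifiable: bool = False   # facts, figures, dates, names checkable?
    accurate: bool = False     # details correct, no calculation errors?
    consistent: bool = False   # matches your own domain knowledge?
    notes: list[str] = field(default_factory=list)

    def passed(self) -> bool:
        # Output only counts as verified when all three boxes are ticked.
        return self.verifiable and self.accurate and self.consistent

# Hypothetical example: checking a founding date the AI produced.
check = VACCheck(claim="Company X was founded in 1987")
check.verifiable = True
check.notes.append("Founding date confirmed on the company's own About page")
print(check.passed())  # False until accurate and consistent are also confirmed
```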

Pro tip: Use NotebookLM for research

For research where you need to be certain about sources, use NotebookLM instead of a regular AI chat. NotebookLM answers exclusively from the sources you upload and provides precise citations with page references. Ideal for market research, competitive analysis, or policy research where source citation is crucial.

Practical verification tips

  • Always verify numbers against reliable sources such as official statistics, annual reports, or government data
  • Verify names and dates via LinkedIn, company websites, or news sources
  • Test against expertise: If something sounds strange, ask a colleague or use your own domain knowledge
  • Ask for sources: Ask AI explicitly to name sources, and check if they actually exist (see the sketch after this list)
  • Compare with internal guidelines: Check if tone of voice, style, and content match your brand guidelines
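Checking whether cited links exist at all can be automated as a first pass. Here is a minimal sketch using only the Python standard library; it only confirms that a URL resolves, it says nothing about whether the page actually supports the claim, and some sites reject HEAD requests, so treat a failure as a prompt to check manually:

```python
from urllib import request
from urllib.error import HTTPError, URLError

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Rough existence check: does the cited URL respond at all?"""
    req = request.Request(url, method="HEAD")
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, ValueError):
        return False

# Hypothetical list of sources an AI answer cited.
for source in ["https://example.com/mckinsey-study-2023"]:
    status = "resolves" if url_resolves(source) else "not found (possibly fabricated)"
    print(source, "->", status)
```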

Practical examples

Hallucinations aren't theoretical: you'll encounter them regularly. Below are some examples to help you recognize them in practice.

Inconsistent training data

The same question asked to three different AI models can produce three completely different answers. This is because each model is trained on different datasets and has different priorities.

Example: Ask ChatGPT, Gemini, and Copilot for the founding date of the same company. You may get three different years. Always verify with the official source!
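That disagreement is itself a useful signal: if models diverge, nothing can be trusted yet. Below is a sketch of the comparison step; the answers are stubbed in by hand, since collecting them depends on each vendor's interface or API:

```python
from collections import Counter

# Stubbed answers to the same question; in practice you would paste in
# what each model actually returned.
answers = {
    "ChatGPT": "1987",
    "Gemini": "1989",
    "Copilot": "1987",
}

counts = Counter(answers.values())
top_answer, votes = counts.most_common(1)[0]

if votes == len(answers):
    print(f"All models say {top_answer}; still confirm the official source.")
else:
    print(f"Models disagree {dict(counts)}: treat every answer as unverified.")
```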

Clock reading is difficult

AI models struggle with reading clocks. This is because they're primarily trained on clock images from web shops, where the hands are usually set to 10:10 or symmetrical times that look visually attractive. Due to this imbalance in training data, models get confused with other times.

Thinking models self-correct

Modern "thinking models" can analyze their own thought process and correct errors. These models show their reasoning and can self-correct when they discover a mistake. This makes them more reliable, but verification remains essential.

Thinking models

Models like Claude with "extended thinking" or OpenAI o1 show their thought process. This helps you understand how they arrive at an answer and where potential errors lie.

Summary

  1. AI hallucinates: It's not a bug, it's how language models work. Expect it and plan for it.
  2. Verification is essential: Use the VAC check (Verifiable, Accurate, Consistent) for every important output.
  3. Know the types: From fabricated facts to incorrect details, know what to look for.
  4. Use the right tool: NotebookLM for source-based research, regular AI chat for creative tasks.