Question and Answer

What is hallucination (in models like ChatGPT)?

Hallucination is the term for what happens when models like ChatGPT present false information as if it were true. Even though the AI may sound very confident, sometimes the answers it gives are simply wrong.

Why does this happen? AI tools like ChatGPT are trained to predict which words should come next in the conversation you are having with them. They are very good at putting together sentences that sound plausible and realistic.

However, these AI models don't understand the meaning behind the words. They lack the reasoning needed to tell whether what they are saying actually makes sense or is factually correct. They were never designed to be search engines. Instead, they might be thought of as “wordsmiths”—tools for summarizing, outlining, brainstorming, and the like.

So we can't blindly trust that everything they say is accurate, even if it sounds convincing. It's always a good idea to double-check important information against other reliable sources.

Here’s a tip: Models that are grounded in an external source of information (like web search results) hallucinate less often. That’s because the model searches for relevant web pages, bases its answer on what it finds, and links to the pages that each part of the answer came from. This also makes the answer easier to fact-check.

Examples of grounded models are Microsoft Copilot, Perplexity, and ChatGPT Plus (the paid version). 
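If you are curious what “grounding” looks like in practice, here is a minimal sketch in Python. It is only an illustration under simple assumptions: the search results are hard-coded stand-ins for real web search output, and the function names (search_web, answer_with_sources) are invented for this example, not part of any real product’s API. The point it shows is that the answer is assembled from retrieved text, with each part linking back to its source so it can be fact-checked.

    from dataclasses import dataclass

    @dataclass
    class Snippet:
        """One piece of retrieved text and the page it came from."""
        url: str
        text: str

    def search_web(query: str) -> list[Snippet]:
        # Stand-in for a real web search call; returns canned results.
        return [
            Snippet("https://example.org/a",
                    "Hallucination means a model states false information confidently."),
            Snippet("https://example.org/b",
                    "Grounded models cite the pages their answers are drawn from."),
        ]

    def answer_with_sources(question: str) -> str:
        # Build the answer only from retrieved snippets, citing each source.
        snippets = search_web(question)
        lines = [f"- {s.text} [source: {s.url}]" for s in snippets]
        return f"Q: {question}\n" + "\n".join(lines)

    print(answer_with_sources("What is AI hallucination?"))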
