In this Tech Insight, we look at new research showing that asking AI chatbots for short answers can increase the risk of hallucinations, and what this could mean for users and developers alike.

Shortcuts Come At A Cost

AI chatbots may be getting faster, slicker, and more widely deployed by the day, but a new study by Paris-based AI testing firm Giskard has uncovered a counterintuitive flaw: ask a chatbot to keep its answers short, and it may become significantly more prone to ‘hallucinations’. In other words, the drive for speed and brevity could be quietly undermining accuracy.
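For context, the brevity instructions in question are the kind developers routinely add to keep responses short and cheap. The sketch below is purely illustrative (it uses the OpenAI Python client; the model name and question are placeholders, not taken from the study) and shows how such a constraint is typically attached as a system message alongside the user's question.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "What caused the blackout across the region last week?"  # placeholder question

# Baseline: no length constraint on the answer
baseline = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; swap in whichever model you use
    messages=[{"role": "user", "content": question}],
)

# Brevity-constrained: the kind of instruction the study associates with more hallucinations
concise = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer in one short sentence."},
        {"role": "user", "content": question},
    ],
)

print(baseline.choices[0].message.content)
print(concise.choices[0].message.content)
```

Comparing the two outputs side by side for the same question is, in essence, the kind of test setup the study's findings speak to: the only difference is the instruction to be brief.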

What Are Hallucinations, And Why Do They Happen?

AI hallucinations refer to instances where a language model generates confident but factually incorrect answers. Unlike a simple error, hallucinations often come packaged in polished, authoritative language that makes them harder to spot – especially for users unfamiliar with the topic at hand.

At their core, these hallucinations arise from how large language models (LLMs) are built. They don’t “know” facts in the way humans do. Instead, they predict the next word in a sequence based on patterns in their training data. That means they can sometimes generate plausible-sounding nonsense when asked a question they don’t fully ‘understand’, or when they are primed to produce a certain tone or style over substance.
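To make "predicting the next word" concrete, here is a minimal sketch using the Hugging Face transformers library and the small, freely available GPT-2 model (chosen purely for illustration, not one of the chatbots in the study). It shows that, given a prompt, the model simply ranks candidate next tokens by probability; there is no separate check on whether the most probable continuation is actually true.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small, freely available model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the next token, given the prompt so far
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five most likely next tokens and their probabilities
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  {prob.item():.3f}")
```

Whatever tokens this prints, they are ranked only by how well they fit the statistical patterns in the training data, which is exactly why a fluent continuation can still be factually wrong.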