We’ve been looking for the wrong signs in the race for artificial general intelligence (AGI). Sure, we still fantasize about the day that AI will solve quantum gravity, out-compose Mozart or spontaneously develop a deep personal trauma from its ‘childhood in the GPU.’ But let’s face it—human intelligence isn’t about ‘logic’ or ‘truth-seeking.’ It’s about confidently bluffing. And AI has nailed it. Let’s talk about it some more.
Confident misinformation is a well-documented phenomenon in AI. Large language models (LLMs) produce confident, detailed answers that are often wrong. In AI terms, these are ‘hallucinations.’ Analysts have estimated that AI chatbots like ChatGPT hallucinate (i.e. produce false information) roughly 27% of the time. In other words, about a quarter of chatbot responses can contain made-up facts.
AI has no concept of truth or falsehood; it generates plausible text. What we call a ‘hallucination’ is a mix of balderdash, bunkum and hogwash, better described by Harry Frankfurt in his essay, On Bullshit. He says a liar knows and conceals the truth, while a ‘balderdasher’ (and likewise an AI chatbot) is indifferent to the truth as long as it sounds legit. AI has learnt from human-written text and mastered the art of sounding confident. In doing so, it sometimes mimics human bunkum artists. It’s human-like, but with one key difference—intent. Humans bluff intentionally, whereas AI has no intent (it’s essentially auto-complete on steroids).
On to bluffing, or answering regardless of actual knowledge. When humans don’t know an answer, they sometimes bluff or make something up, especially to save face or appear knowledgeable.
By its very design, AI always produces an answer unless explicitly instructed to say “I don’t know.” GPT-style models are trained to continue the conversation and provide a response. If a question is unanswerable or beyond the model’s knowledge, it will still generate a reply, often a fabricated one. This behaviour is a form of ‘bluffing’ or improv. AI isn’t choosing to bluff; it’s just statistically guessing a plausible answer. Even GPT-4 made up academic citations: one test showed that 18% of the references it produced were fake.
AI researchers are trying to build in uncertainty estimation, or to have models say “I’m not sure” more often. Anthropic and OpenAI have worked on techniques for their models to indicate a degree of confidence. Yet, as of 2024, even the top models still answer by default.
This is very troubling. An AI bot that never admits uncertainty can be dangerous when people take its word as truth. AI’s bluffing behaviour is real, though not usually labelled in those terms; it is discussed under phenomena like hallucination, overconfidence, or lack of calibration.
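To make ‘indicating a degree of confidence’ concrete, here is a minimal, purely illustrative sketch in Python. It is not any lab’s actual method, and the tokens and probabilities below are hypothetical; the point is simply that a model’s own per-token probabilities can be used to decide when to abstain rather than answer.

```python
# Toy illustration: abstain when the model's own token probabilities
# suggest it is guessing. Not a real vendor API; values are hypothetical.
import math

def answer_or_abstain(tokens, token_probs, threshold=0.5):
    """Return the generated answer only if confidence clears a threshold.

    tokens      -- the generated answer, as a list of token strings
    token_probs -- the model's probability for each generated token
    threshold   -- minimum geometric-mean probability required to answer
    """
    # Geometric mean of token probabilities is a crude proxy for confidence.
    log_sum = sum(math.log(p) for p in token_probs)
    confidence = math.exp(log_sum / len(token_probs))
    if confidence < threshold:
        return "I'm not sure."
    return "".join(tokens)

# Hypothetical example: a fluent but low-probability (i.e. guessed) claim.
print(answer_or_abstain(
    ["The ", "paper ", "was ", "published ", "in ", "2017."],
    [0.9, 0.4, 0.8, 0.3, 0.7, 0.2],
))  # prints "I'm not sure." because average confidence is below 0.5
```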
Let us now look at ‘emotional redirection’ (simulated empathy and deflection). Do AI systems do this? Yes, to an extent and by design. Customer service chatbots and virtual assistants are programmed to use empathetic language.
For instance, if a user is frustrated or the AI can’t solve something directly, the system often responds with apologies or statements like “I understand how you feel” and “I’m sorry, that must be frustrating.” This is scripted or simulated empathy. AI has learnt that polite, caring language leads to better user satisfaction. Researchers and critics call this ‘emotion without emotion’: AI has no feelings, but it can parrot expressions of empathy.
There’s also an element of emotional redirection: an AI bot that can’t solve your problem might keep you engaged by saying comforting things, thus diverting the conversation from its failure. In human terms, this is like using charisma or reassurance to mitigate a lack of substance.
AI learns from countless examples of human dialogue in its training data (think customer-support scripts, therapy transcripts and so on). When faced with conflict or an inability to help, a human often responds emotionally with words like “I’m sorry… I understand.” AI does the same. In AI ethics, there is concern about AI-driven emotional manipulation, even if benign, because it can create a false sense of rapport.
So, is it fair to argue that AI mimics human behaviours like confident misinformation, bluffing and emotional redirection? Yes, and these are documented behaviours, not just a cute analogy.
However, there is a nuance here. Unlike humans, AI isn’t deciding to lie or trying to manipulate us emotionally; it’s simply predicting the next best word sequence based on probabilities. It is mimicking human cognitive quirks without understanding them. We need to fix these issues, since we don’t want a system that behaves like a clueless but charismatic human. We needn’t be cynical, experts say, as AI models have high linguistic fluency but often lack grounding in truth. This is a fine understatement.
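To see what ‘auto-complete on steroids’ means in practice, here is a toy sketch. The words and probabilities are invented for illustration and bear no resemblance to a real model’s scale; the point is that the system samples whichever continuation its training made most probable, with no check on whether the result is true.

```python
# Toy sketch of next-word prediction: the model picks whichever continuation
# is most probable given its training data, with no notion of truth.
import random

# Hypothetical learned probabilities for what follows "The capital of Atlantis is"
next_word_probs = {
    "Poseidonia": 0.45,   # sounds plausible, entirely made up
    "unknown":    0.30,
    "Atlantis":   0.15,
    "underwater": 0.10,
}

def predict_next_word(probs):
    """Sample the next word in proportion to its probability: fluency, not truth."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The capital of Atlantis is", predict_next_word(next_word_probs))
```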
Spending billions trying to make AI replicate our most logical behaviours, only to discover that it has accidentally cloned the human instinct to be economical with the truth instead—is that a flaw? Maybe. Or is it the biggest sign yet that AI is just like us?
The author is a technology advisor and podcast host.