Cognitive surrender
Massive, inevitable cognitive surrender doesn’t exist yet. Clickbait about cognitive surrender, though: that arrived right on time.
In January 2026, two researchers at the Wharton School of the University of Pennsylvania published a paper with the most irresistible title of the season: “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.” The central concept, “cognitive surrender,” promised a fresh diagnosis for an old anxiety: are we stopping thinking because AI is thinking for us?
The headlines didn’t take long. Futurism: “Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong.” Forbes: “Study Finds We Trust AI More Than Our Own Brains.” And the phrase started spreading through newsletters, podcasts, and social media at the speed only ideas can reach when they confirm what we already suspected.
The problem, as almost always, lies in the distance between what the paper actually says and what the headlines claim it says.
What the study says, and what it doesn’t
Shaw and Nave designed three pre-registered experiments with 1,372 participants and nearly 10,000 individual trials. The instrument of choice was the Cognitive Reflection Test (CRT), a set of classic questions designed specifically so that the intuitive answer is tempting and wrong. The canonical example: a bat and a ball cost $1.10 in total; the bat costs $1 more than the ball; how much does the ball cost? The instinctive answer is 10 cents. The correct one is 5 cents.
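For anyone who wants the arithmetic spelled out, writing x for the price of the ball makes the trap visible in one line:

x + (x + 1.00) = 1.10, so 2x = 0.10 and x = 0.05.

The ball costs 5 cents and the bat $1.05; the intuitive 10-cent answer would force the bat to $1.10 and push the total to $1.20.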
Participants had access to an AI assistant (GPT-4o) embedded directly in the test interface. The result: they consulted the AI on more than half of the problems, followed it 93% of the time when it was right, and still followed it around 80% of the time when it was deliberately wrong — the researchers were forcing errors through hidden prompts that participants had no way of detecting.
From there came the theoretical framework: the study proposes extending Daniel Kahneman’s familiar dual-system model of thinking (the fast, intuitive System 1 and the slow, deliberate System 2) with a “System 3” that incorporates artificial cognition as an external agent able to complement or supplant human reasoning. It also introduces “cognitive surrender” as the moment when someone is no longer delegating thinking but simply abandoning it altogether.
The results are striking. But it’s worth pausing on the conditions of the experiment before drawing sweeping conclusions.
The details and methodology of the paper matter. They really do.
The CRT is designed to trick you. Its questions are built around a trap: the wrong answer is the most intuitive one. When the AI delivers that wrong answer wrapped in a confident, articulate explanation, it isn’t replicating a typical everyday mistake; it’s targeting the most vulnerable point of human reasoning under controlled laboratory conditions. The real world works differently: errors from current models usually come with warning signs (internal inconsistencies, implausible data, impossible dates) that regular users learn to spot with experience.
The design primes trust. The researchers presented the AI to participants as a tool available for use, and that framing already creates a disposition: if it’s built into the test, the natural assumption is that it’s supposed to be relevant. Deference to a source perceived as authoritative isn’t a finding specific to AI; it’s how humans respond to any source that presents itself as expert. The anchoring effect, automation bias, and authority compliance have been documented for decades with far more mundane sources than ChatGPT: doctors, financial advisers, recommendation algorithms.
Context matters more than technology. The paper itself acknowledges that cognitive surrender is not inevitable: it varies with the user’s domain knowledge, level of self-confidence, time pressure, and the format in which responses are presented. In other words, what the researchers call “cognitive surrender” is partly a situational phenomenon, not a generalized condition of AI users.
None of this invalidates the research or the underlying concern. But there is a real difference between “under certain laboratory conditions, people follow AI even when it’s wrong” and “we are losing our capacity to think as a species” — which is the tone many headlines chose.
The Shaw and Nave paper is, in itself, serious work. But the media ecosystem turned it into ammunition for the usual panic cycle. Not because the researchers lied, but because “cognitive surrender” was exactly the kind of phrase that sells: resonant, mildly technical, with implications broad enough for every reader to project their own fears onto it.
Perhaps another day I’ll take a closer look at the work of Evan Risko and Sam Gilbert, who coined the term “cognitive offloading” and argue that using technology to offload mental effort is an intelligent metacognitive decision, not a failure. But I haven’t read those studies in depth yet, so I’ll leave that for another time.
The sharpest irony in all of this is that the journalism denouncing “AI is making us stop thinking” is, more often than not, the very same journalism designing headlines so that readers don’t stop to think. Much like something I wrote about a few months ago.
Massive, inevitable cognitive surrender doesn’t exist yet. Clickbait about cognitive surrender, though — that arrived right on time.
We’ll see.



