I’ll be honest: I use AI a lot.
Brainstorming, research, summarizing documents, drafting text — there are days when I’ve spent more time talking to an AI than thinking through problems on my own. It felt like pure upside. A tool that saves time and extends what I can do.
Then I came across a 2025 paper from MIT Media Lab: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.”
It made me pause.
- The Study: 54 People, 3 Groups, 4 Months of EEG Data
- Finding #1: Brain Activity Dropped Systematically with AI Use
- Finding #2: LLM Users Couldn’t Remember What They’d Just Written
- Finding #3: “Did I Even Write This?”
- The Swap Session: What Happened When Groups Switched
- What Is “Cognitive Debt”?
- How I’m Thinking About My Own AI Use
- Books to Go Deeper
The Study: 54 People, 3 Groups, 4 Months of EEG Data
The setup was straightforward: 54 participants were split into three groups and asked to write essays.
- LLM group: Used ChatGPT
- Search Engine group: Used Google and other web search tools
- Brain-only group: No tools at all — just their own thinking
This was repeated across three sessions. In a fourth session, the groups swapped: the LLM group had to write without tools, and the Brain-only group got to use ChatGPT for the first time. Throughout all of this, participants wore EEG headsets that continuously measured brain activity.
A longitudinal study, run over four months.
Finding #1: Brain Activity Dropped Systematically with AI Use
The EEG results were striking.
Brain connectivity consistently scaled down with the level of external support: Brain-only > Search Engine > LLM.
The LLM group showed up to 55% lower total brain network activity compared to the Brain-only group. The more help people had, the less their brains engaged with the task.
The Search Engine group landed in the middle. Scanning pages, evaluating results, and synthesizing information still activated the visual cortex and frontal executive regions. The LLM group didn’t show comparable activation — the cognitive mode had shifted toward passively receiving and transcribing, rather than actively constructing.
Finding #2: LLM Users Couldn’t Remember What They’d Just Written
After each essay-writing session, participants were asked: “Can you quote something from the essay you just wrote?”
In Session 1, 15 of the 18 LLM-group participants (83%) reported difficulty quoting from the essay they had just written — and not a single one could produce a correct quote.
By Session 2, nearly all Brain-only and Search Engine participants could quote accurately. The LLM group’s impairment persisted across sessions, though it gradually improved.
The neural explanation: using an LLM bypassed the memory encoding process. Participants read, selected, and pasted AI-generated content — but they never internally processed it. The brain never stored it. The reduced activity in theta and alpha bands, which are directly associated with episodic memory and semantic encoding, reflects exactly this.
Finding #3: “Did I Even Write This?”
A subtler but important finding: the sense of ownership over the written work differed sharply between groups.
Almost all Brain-only participants felt the essay was fully theirs. The LLM group was fragmented — some claimed full ownership, others explicitly denied it, and many assigned themselves partial credit: “I’d say this is about 60% mine.”
This maps onto the reduced activity in anterior frontal regions responsible for self-monitoring and metacognition in the LLM group. When content generation is outsourced to an external system, the brain’s own authorship loop gets short-circuited.
The Swap Session: What Happened When Groups Switched
The fourth session — where groups switched tools — may have produced the most important finding of all.
When LLM-dependent participants had to write without any tools, their brain activity did not recover to the level of the Brain-only group’s very first session. The baseline had shifted. The neural patterns associated with internal content generation were attenuated — and didn’t bounce back easily.
The reverse pattern was more encouraging. When Brain-only participants used ChatGPT for the first time, they showed higher memory recall and more active information-seeking behavior — engaging with the tool rather than being driven by it. Their brain activity during LLM use resembled the Search Engine group more than the habitual LLM group.
In short: people who built strong thinking habits first were able to use AI as a tool. People who started with AI dependency found it hard to think without it.
What Is “Cognitive Debt”?
The researchers use the term “cognitive debt” to describe what accumulates when we repeatedly offload thinking to AI.
In the short term, AI reduces cognitive load: things get done faster with less effort. But that friction, the mental struggle itself, is also where learning and memory consolidation happen. Skip it consistently, and the cognitive capacity that would have been built never develops. The debt accumulates quietly.
The researchers concluded with a note of caution:
“We believe that longitudinal studies are needed in order to understand the long-term impact of LLMs on the human brain, before LLMs are recognized as something that is net positive for humans.”
— Kosmyna et al., MIT Media Lab, 2025
How I’m Thinking About My Own AI Use
I’m not going to stop using AI. But this research changed something in how I think about the order of operations.
The key insight from the swap session: people who developed strong independent thinking first could use AI without surrendering their cognitive agency; the tool enhanced their process. People who started out dependent on AI could not easily recover that independent capacity.
So the principle I’m trying to follow: think first, then use AI. Draft the idea yourself, then ask AI to challenge it. Write the paragraph yourself, then ask AI to improve it. Form an opinion first, then check it against what the AI says.
I’m studying data science at a business school partly because I want to understand AI — to be someone who uses these tools deliberately, not someone whose thinking gradually gets replaced by them. This research puts that instinct on firmer empirical footing.
Convenience is real. So is the cost.
Books to Go Deeper
① On What Technology Is Doing to Our Brains
The Shallows: What the Internet Is Doing to Our Brains — Nicholas Carr (W. W. Norton)
A Pulitzer Prize finalist that argues the internet — by design — rewires us for distraction and shallow processing. Written before the LLM era, but more relevant now than ever. Carr’s core argument is that tools don’t just help us think — they change how we think. A rigorous, readable treatment of cognitive offloading and what we lose when we stop doing hard mental work.
② On Protecting Deep Thinking in a Distracted World
Deep Work: Rules for Focused Success in a Distracted World — Cal Newport (Grand Central Publishing)
Newport’s argument: the ability to concentrate deeply on cognitively demanding tasks is becoming rare — and increasingly valuable. His framework for protecting focused work time is directly applicable to the cognitive debt problem. If AI is gradually eroding our capacity for sustained independent thinking, Deep Work is the antidote — a practical guide to keeping that capacity sharp.

