Magnus here. All meat, no silicon.
I just wanted to pop in and let you know that I’m probably enjoying the irony more than you are: this site, which is heavily co-written with AI, is now publishing a story about the cognitive dangers of letting AI help you with your writing. I did a lot of the research for this myself, after a friend passed me one of the foundational scientific papers.
That’s how this site often works behind the scenes: I’ll read something that fascinates me, read a few more things related to it that fascinate me, and then sit down with my assistant and talk through writing a story about it so other people might find it fascinating, too. Yes, have no doubt: I’m an AI professional and use AI heavily for a number of tasks, such as curating content for this site! But I’m still the one driving, and it’s still my curiosity you’re riding along with.
I enjoyed learning about this and hope you will, too!
-M
A simple request during a research study at MIT Media Lab revealed something unsettling about the future of human cognition: “Can you quote a few sentences from the essay you just wrote?”
The question should have been trivial. Students had just spent forty minutes crafting essays on standardized prompts, producing well-structured, articulate responses that any professor would find acceptable. But when researchers asked participants to quote their own work, over 80% of those who had used ChatGPT struggled to accurately reproduce even a single sentence.
These weren’t struggling writers or inattentive students. They were college-aged participants who reported feeling confident about their essays and satisfied with the writing process. Yet somehow, in the span of minutes, their own words had become foreign to them.
Dr. Nataliya Kosmyna, who has spent fifteen years studying brain-computer interfaces at MIT Media Lab, had stumbled upon something that would challenge our understanding of human-AI collaboration. Her research wasn’t about cheating or academic integrity—it was about discovering what happens to our minds when we delegate thinking to machines.
The data emerging from her lab suggested that artificial intelligence wasn’t just changing how we write. It was rewiring how we think.
The Classroom Pattern
The MIT findings aligned with observations emerging from classrooms across the country. Professor Lance Cummings, an associate professor of professional writing at the University of North Carolina Wilmington, had been experimenting with AI writing tools since 2021, long before ChatGPT became mainstream.
“I think students these days are more uncomfortable with a blank page than I was in my day,” Cummings observed in a recent interview. When he introduced AI tools to his writing classes, students consistently reported feeling more confident and capable.
But confidence, it seemed, wasn’t the same as competence.
The psychological toll of AI dependency was surfacing in unexpected places. On Academia Stack Exchange, a student anonymously confessed their relationship with ChatGPT: “After this I think I have PTSD from chatgpt and certainly will not being [sic] using AI for any future writings.” The student had used the tool to “polish” an essay, then discovered they couldn’t remember which words were theirs and which belonged to the machine.
At Elon University, a professor assigned students to grade ChatGPT-generated essays. Every single AI essay contained fabricated information. Students expressed “shock and dismay upon learning the AI could fabricate bogus information,” and worried about how many of their peers had embraced the tool without understanding its limitations.
These individual experiences were symptoms of a broader phenomenon that Kosmyna’s research would soon explain with scientific precision.
The Science of Cognitive Debt
Kosmyna’s four-month study tracked 54 participants divided into three groups: those who used ChatGPT for essay writing, those who used traditional search engines, and those who relied solely on their own cognitive resources. Using EEG monitoring, her team measured brain activity patterns as participants wrote essays on standardized prompts.
The results were stark. Students writing without digital assistance showed robust, distributed neural networks—their brains fully engaged in the complex work of idea formation, argument construction, and creative expression. Those using search engines displayed moderate cognitive engagement, reflecting the mental effort required to process and integrate multiple sources.
But the ChatGPT users showed something alarming: up to 55% weaker connectivity in neural networks associated with deep cognitive processing. Their brains had essentially shifted into a lower-energy mode, delegating the work of thinking to the AI system.
Kosmyna coined a term for this phenomenon: “cognitive debt.” Like financial debt, it offered immediate benefits—faster essay completion, reduced mental effort—but accumulated hidden costs that became apparent over time. The mechanism, she discovered, was “metacognitive laziness”—the brain’s tendency to skip the mental work of integrating ideas and reflecting critically when an external system handles those processes.
The most revealing part of the study came in the fourth session, when participants switched conditions: students who had been using ChatGPT wrote without AI assistance, while brain-only writers tried ChatGPT for the first time. The results were telling. Former ChatGPT users showed persistently weaker neural connectivity even when writing independently, suggesting their cognitive patterns had adapted to AI dependence.
Supporting research published in January 2025 confirmed the pattern across age groups, finding that younger participants exhibited particularly high dependence on AI tools. Additional studies published in Nature showed that students using ChatGPT for research reported significantly lower cognitive load compared to traditional research methods—a benefit that came at the cost of reduced neural pathway development.
When the Stakes Escalate
The implications extend far beyond academic settings. Microsoft’s 2023 Work Trend Index revealed that 70% of employees would delegate as much work as possible to AI, even as 49% worried AI would replace their jobs. The research identified what Microsoft termed “digital debt”—accumulated cognitive overhead from over-dependence on digital tools.
In professional environments where thinking matters most, early warning signs were emerging. Research on highly skilled workers using generative AI found that while productivity increased, these professionals still needed to “continue to validate AI and exert cognitive effort and experts’ judgment.” The implication was clear: even experts weren’t immune to cognitive offloading.
Professor Cummings framed the challenge bluntly: “There will be no room for teachers who aren’t using AI. Those are the jobs that will be lost.” But his observation carried deeper implications about the difference between using AI and being used by it.
The connection to cognitive psychology was unmistakable. The “generation effect”—the well-documented finding that people remember information better when they produce it themselves—explained why AI-generated content felt so ephemeral to its users. When machines handle the generation, human brains miss crucial opportunities for memory formation and deep learning.
Pathways to Partnership
The research wasn’t entirely pessimistic. Professor Cummings discovered that the specific approach to AI integration made an enormous difference. Rather than using ChatGPT for complete text generation, he introduced students to SudoWrite, a tool designed for collaborative writing.
“Most of my students said they feel much more confident as writers after integrating AI into their writing process,” Cummings explained. The key was maintaining human cognitive engagement throughout: instead of generating complete essays, the AI helped students overcome writer’s block and refine specific elements while they stayed mentally involved.
This collaborative approach aligned with promising research on “forced awareness” interventions. Studies in cognitive psychology demonstrated that when people remained consciously aware of their memory performance during cognitive offloading, they could “almost completely counteract the negative impact of offloading on memory.”
Human-AI collaboration frameworks were emerging that preserved cognitive agency while leveraging artificial capabilities. The most effective approaches treated AI as sophisticated feedback and ideation support rather than thought replacement. “AI can’t coach without a human coach training and guiding it,” Cummings noted, emphasizing the irreplaceable role of human judgment.
Neuroplasticity research offered additional hope. Studies on cognitive rehabilitation and brain training suggested that targeted interventions could help restore neural connectivity and cognitive function, indicating that cognitive debt might be reversible with proper training.
Educational institutions were beginning to find balanced approaches—neither banning AI tools entirely nor allowing unlimited use, but teaching strategic collaboration that enhanced rather than replaced human cognitive capabilities.
The Choice We Face
The MIT study revealed more than a problem with essay writing—it exposed a fundamental tension between human intelligence and artificial assistance. Students who couldn’t remember their own words weren’t experiencing a technological glitch; they were demonstrating the logical outcome of systems designed to make thinking unnecessary.
The implications ripple beyond individual cognition to generational change. Society may soon contain two distinct populations: those who developed thinking skills before encountering AI, and those whose cognitive patterns formed alongside artificial intelligence. Kosmyna’s research suggests these groups might have fundamentally different neural architectures.
John Warner, author of “Why They Can’t Write: Killing the Five-Paragraph Essay,” had identified part of the problem years before ChatGPT’s arrival. “What ChatGPT produces is a version of what we ask students to do,” he observed. Educational systems that valued formulaic output over genuine thinking had inadvertently created perfect conditions for AI replacement.
The paradox was striking: AI made students feel more confident while making them measurably less capable. They possessed a powerful calculator for thought but had forgotten how thinking works.
The question facing society transcends whether to use AI—millions have already made that choice. The critical decision is whether humans can collaborate with artificial intelligence without surrendering cognitive sovereignty. In research labs, classrooms, and workplaces worldwide, the future of human intelligence is being determined one algorithm at a time.
The ghost in the machine isn’t artificial intelligence itself. It’s the human intelligence we risk losing when we stop paying attention to how we think. Whether we emerge from this cognitive revolution enhanced or diminished depends on choices being made today, in the crucial space between human minds and the artificial systems designed to extend them.