The Helicopter Problem

Mar 15, 2026

Learning & Change

By Kavi Arasu

Cognitive debt in AI is real.

What a mathematician’s observation about AI tells us about the future of leadership judgement

Terence Tao — widely regarded as the world’s greatest living mathematician — recently described a concern about using AI to solve hard problems. His framing was simple enough to fit on a napkin and sharp enough to keep you up at night.

Hard problems, he said, are like distant locations you would hike to. You make the journey. You lay down trail markers. You make maps that others can follow. AI tools are like taking a helicopter to the site. You reach the destination. You miss everything in between. And the destination, it turns out, was only part of the value.

He was talking about mathematics. The observation applies rather more broadly.

What the research found

In June 2025, researchers at MIT Media Lab published a study tracking 54 participants across four months of writing tasks — some using ChatGPT, some using search engines, some relying only on their own minds. They measured cognitive engagement via EEG, analysed the outputs for originality, and interviewed participants afterwards.

The results were uncomfortable. ChatGPT users showed weaker neural connectivity, lower memory recall, and reduced originality. They also reported the lowest sense of ownership over work they had nominally produced — and struggled to recall or explain the content of their own essays. The researchers named this effect cognitive debt: the cumulative cost of outsourcing thinking before you have done any yourself.

Two findings are worth holding. First, sequence matters more than tool use. Participants who attempted the task independently before turning to AI retained far stronger cognitive engagement throughout. Those who reached for the tool first struggled to think clearly even after it was removed. The habit of bypassing initial effort left them, in effect, cognitively unmoored.

Second, the AI-assisted essays converged. Similar structures, similar framings, similar conclusions. The brain-only group produced more diverse — and more original — work.

The study is not yet peer-reviewed and its sample is modest. But the directional finding is hard to dismiss, particularly when you scale it from a student writing an essay to a senior team making a consequential decision.

What this looks like in practice

Many senior leaders now arrive at critical discussions having received an AI-synthesised brief. The key themes extracted. The options surfaced. The meeting starts faster, which everyone agrees is a good thing.

The efficiency gain is real. So is what gets traded away.

The unstructured thinking time — reading the original material, forming a first instinct, sitting with what does not quite fit — has been skipped. Which means the leader in the room has a position, but may not have a view. These are not the same thing. A position can be summarised. A view can be defended.

Tao’s metaphor is useful here. The leader who made the journey can orient others — explain the terrain, mark the dead ends, describe what the approach looked like before the destination came into view. The leader who took the helicopter can describe the destination. Useful. But not the same thing, and under pressure the difference surfaces quickly.

There is a further risk at the team level. If several members of a senior team are using similar tools — trained on similar data, fed similar prompts — to prepare for the same conversation, the diversity of input to that conversation is lower than it appears. They may disagree vigorously. They may also be disagreeing within a narrower frame of possible thought than any of them realise, because their thinking was shaped upstream by the same source.

A leadership team that looks analytically equipped but is drawing on a shared synthetic feed is not the same as a leadership team that has done its own thinking. It just looks that way until something genuinely hard arrives.

The question worth sitting with

The argument is not that AI has no place in leadership work. It demonstrably does. The question is about sequence.

The MIT research is clear: those who think first and use AI second retain both cognitive engagement and ownership of their own conclusions. The debt accumulates when the tool becomes the entry point — when AI is where thinking starts rather than where it gets sharpened.

A useful test: could you reconstruct the core of your reasoning without the AI output? If the position you hold in a meeting is, in substance, the position the tool gave you — something has been skipped. Not irretrievably. Not catastrophically. But consequentially.

The discomfort of wrestling with a hard question, arriving at an uncertain view, and defending it in a room full of intelligent people who disagree is not a problem to be optimised away. It is the job.

The helicopter gets you there. The hike is what makes you useful when you arrive.