Pitfall #1: How You Position AI Matters

Feb 01, 2026

Technology & Change

By Deepa Premkumar

[Cartoon: a business leader presents AI while employees fill in the blanks with fear, fragmentation, and disillusionment.]

Most organisations do not position AI deliberately.

They let the positioning happen by default — through scattered comments in town halls, through budget discussions focused on “efficiency savings,” through opaque communications that leave employees guessing.

And here is the thing about human beings: we are meaning-making machines. We fill in the blanks. Always. And negative interpretations almost always win the race for mindshare. When leadership is silent about what AI means for people’s roles, employees do not assume the best. They assume the worst.

By the time leadership realises they have a positioning problem, the narrative is already set. The watercooler conversations have happened. The fear — or the confusion, or the cynicism — has taken root.

“Silence is not neutral. In the absence of a clear narrative, your organisation will write its own. And you will not like the draft.”

Here are three ways leaders get the AI narrative wrong — often with the best of intentions.

 

1. The Threat Narrative: “AI Will Make Us More Efficient”

This is the most visible mistake. Leadership introduces AI using language like “headcount optimisation,” “cost rationalisation,” or “efficiency savings.” Sometimes they do not use those exact words, but employees hear the subtext loud and clear.

The intention is often genuine — leaders are under pressure to show ROI, and they frame AI in terms the board and investors understand. But the moment employees hear “efficiency,” they translate it to “fewer of us.” The perceived threat triggers the amygdala. People stop sharing their work. They resist the tool. They hide the very innovations that could make AI work. The organisation’s collective intelligence drops precisely when it needs to rise.

“If you buy AI to replace people, you buy a ceiling. If you buy AI to augment people, you buy a sky.”

 

2. The Solo Narrative: “Here Is AI — Have Fun”

This mistake looks progressive. Leadership rolls out a generative AI tool with an encouraging message — “Experiment! Explore! Play!” — and no further asks. No expectations around sharing learnings. No structures for collective discovery or definition of what “good” looks like. Or they frame it purely as a personal productivity boost: “Here is your copilot — it will save you two hours a day.”

The intention is admirable: give people space, do not be controlling, let innovation emerge organically. But what actually happens is fragmentation. Every employee experiments in isolation. Someone in marketing discovers a brilliant prompting technique. Someone in finance builds a workflow shortcut. Operations finds a use case that could save thousands of hours. None of them know about each other. The organisation stops at individual productivity and never considers the bigger picture — how all of this moves the organisation forward as a whole.

Six months later, you have 200 individual AI users. You do not have an AI-capable organisation. And the gap between those two things is where transformation dies quietly.

 

3. The Magic Narrative: “AI Will Transform Everything”

This is the hype trap. Leadership, genuinely excited by what they have seen in demos and pilot programmes, oversells AI internally. “This will revolutionise how we work.” “This is the biggest change since the internet.” The expectations are sky-high.

Then reality hits. AI hallucinates a legal citation. It produces a mediocre first draft that needs heavy editing. It confidently generates numbers that are completely wrong. The gap between the promise and the lived experience creates a specific kind of disillusionment — one that is harder to recover from than scepticism, because people feel they were sold something that does not exist.

This connects directly to Pitfall #7 later in this series: organisations that overpromise on AI are the ones most likely to give up too quickly when early results underwhelm.

 

The Klarna Pivot: When the Threat Narrative Hits a Wall

In early 2024, Klarna grabbed headlines by announcing that its AI assistant was doing the work of 700 agents. While the markets cheered the $40 million in savings, the internal culture felt the chill. Employees became hesitant to innovate — why would you experiment with a tool that might optimise you out of a job? Customers missed the nuance of human empathy. By 2025, the “cost-first” narrative had reached its limit. Klarna had to pivot, eventually rebranding the initiative as a “human-plus-AI” strategy to regain the trust of both its workforce and its customers.

The lesson: the threat narrative might win a quarter. It does not win a transformation.

 

The Zapier Counter-Example: When Positioning Creates Collective Intelligence

Zapier’s CEO Wade Foster took the opposite approach. In early 2023, he issued a “Code Red” memo positioning AI unambiguously: this was about capability expansion, not headcount reduction. But Foster did something beyond just getting the narrative right — he created structures for collective learning. A #fun-ai Slack channel where teams shared experiments. Public sharing of failures, not just successes. Leaders modelling the behaviour visibly, admitting mistakes, treating AI stumbles as learning opportunities.

The framing was consistent: AI helps you do more of the work you love and less of the work you hate. And critically, the ask was not just “use AI” but “use AI and tell us what you are finding.”

Results: 65% adoption within six months, climbing to 89% by spring 2025. The difference was not the technology. It was that Zapier built an AI-capable organisation, not just a collection of individual AI users.

 

How to Break Into the Next Orbit

Here are three ideas for getting the positioning right:

  • Position AI as augmentation from day one. If headcount changes are coming, separate that conversation entirely from AI adoption. The moment AI and job cuts share the same sentence, you have lost the narrative.
  • Create structures for collective learning, not just individual access. Shared Slack channels, regular show-and-tell sessions, cross-team learning sprints. Make the ask explicit: we do not just want you to use AI — we want you to share what you are discovering so the whole organisation gets smarter.
  • Set realistic expectations and model the behaviour. Leaders must use AI visibly, share their own struggles with it, and talk openly about where it excels and where it falls flat. The goal is calibrated trust, not blind faith — and that starts at the top.

 

It is not just what you say about AI that matters. It is what you ask of your organisation in return. Positioning without expectation produces fragmentation. Positioning with shared asks produces collective intelligence.

 

This is post 2 of 12 in a series on the organisational pitfalls of AI adoption. [Read the series overview here.] Next up: Pitfall #2 — what happens when organisations buy AI tools before they have defined the problem those tools are supposed to solve.