FOMAI: The Fear of Missing AI

Feb 24, 2026

Technology & Change

By Kavi Arasu

At the AI Impact summit in India in February 2026, a university was asked to vacate its exhibition stall. The robotic dog it had displayed as an in-house innovation — named “Orion,” attributed to its Centre of Excellence — turned out to be a commercially available Chinese model. The claim unravelled quickly, as these things tend to do now, under the unforgiving clarity of the internet.

Writing about the incident in the Indian Express, Srinath Sridharan gave the underlying condition a name: FOMAI. The fear of missing AI. “The anxious rush,” he wrote, “to appear AI-ready before becoming AI-capable.”  

The term is precise. And it does not belong only to summits or universities.

It is worth sitting with that image for a moment — the escorted dog, the stall, the claim that came apart — before moving to where the same pattern plays out every day, quietly, inside organisations.

The Pressure to Appear Ready

Across organisations right now, there is enormous pressure to look AI-ready. Boards are asking. Markets are watching. Nobody wants to be the leadership team that missed it.

So we see familiar moves. AI strategies get announced. Innovation labs get set up. Dashboards get built. Presentations get a new vocabulary.

And then — not much changes.

The technology moves. The organisation waits.

This is not dishonesty. Leaders are responding to real pressure. The incentive to signal readiness is strong and immediate. The work of actually building it is slower, less visible, and rarely applauded. So the signalling tends to run ahead of the substance.

FOMAI, in other words, is not unique to exhibition stalls. It is a pattern that plays out in boardrooms, strategy decks, and quarterly reviews the world over.

When the Light Comes On

There is something important that happens when you put new technology into an old system. The technology does not just add capability. It adds visibility.

And visibility is not always comfortable.

We worked with a team that was moving from Excel sheets to a live dashboard as part of an organisation-wide digital transformation initiative. The goal was straightforward: better data, faster decisions. But when the dashboard went live, something unexpected happened. Every gap in the data became visible. Every workaround, every missing field, every number that had been estimated rather than measured — all of it was now on screen, in real time, for anyone to see.

Senior management began commenting on the gaps. Publicly, in meetings, repeatedly. Within weeks, employees began sidestepping the dashboard. It felt safer to go back to the spreadsheet, where the mess was private.

The technology had not failed. But the adoption collapsed — because the organisation did not yet have the capacity to change in response to what the technology revealed.

This is what AI will do, at much larger scale. It will not just automate. It will illuminate. And if the response to illumination is defensiveness rather than curiosity, the organisation will pull back from the very thing it announced with such confidence.

Three Times It Went Differently

This is not inevitable. Some organisations have handled it well. What they share is a willingness to fix the foundations before declaring the transformation.

Mayo Clinic spent years building what it calls a data-centric approach before deploying its AI systems. Rather than training models on whatever data existed, it invested in anonymised, domain-specific, carefully curated patient data first. The result was an AI system that outperformed general-purpose models in clinical settings. The announcement came after the work. Not before.

A global logistics company — cited in a World Economic Forum study — took a different approach to scaling. Instead of rolling out AI tools across the entire network at once, it established small digital knowledge-sharing mechanisms. Successful implementations from one distribution centre were studied, adapted, and only then deployed elsewhere. Local teams were given room to customise. Adoption followed naturally, because the organisation had built the conditions for it.

Ford tried something closer to the island model in 2016, setting up a separate unit called Ford Smart Mobility to lead its digital transformation. The unit operated at a distance from the rest of the business, reported a loss of around $300 million, and struggled to create change that travelled back into the core organisation. Ford learned from this. It eventually rebuilt its approach from the inside — integrating the work rather than quarantining it — and went on to become a serious player in the electric vehicle industry. The lesson was not that transformation is impossible. It was that transformation housed on an island tends to stay there.

What AI Amplifies

There is a simple truth underneath all of this.

AI amplifies what already exists. If the organisation is adaptive, AI accelerates value. If the organisation is rigid, AI makes the rigidity visible faster and at higher cost.

A 2025 MIT study, The GenAI Divide: State of AI in Business, drawn from interviews with 52 organisations and analysis of over 300 AI deployments, found that 95% of enterprise AI pilots delivered no measurable bottom-line impact. The primary reason was not the technology. It was the failure to integrate AI into actual workflows and redesign the processes around it before deploying.

We repeatedly find in our work that the technology is not the constraint. The organisation is. More precisely: the organisation’s capacity to change is.

This shows up in specific ways. Data that has never been cleaned cannot suddenly power a model. Decision processes that require three levels of approval cannot suddenly operate at the speed AI enables. Teams that have been rewarded for output rather than learning cannot suddenly become the experimental, curious, fast-adapting units that AI adoption requires.

None of this is a reason not to move. It is a reason to be honest about what the move actually involves.

What Can You Do?

Not everything needs to be solved at once. But the sequence matters more than most leaders acknowledge.

The most useful question is not “What is our AI strategy?” It is “Where are we structurally unready to change?” That question tends to produce more honest answers, and it points directly to the work that will determine whether the strategy lands or stalls. It is also, in our experience, the question most leadership teams have not yet sat with long enough.

Data quality comes before model selection. Decision flow comes before dashboards. Incentive structures come before announcements. These are not glamorous priorities. They do not produce good slide decks. But they are the difference between AI that creates value and AI that creates a very expensive demonstration of why the foundations needed fixing.

The island model — a dedicated AI team, separate from the rest of the business — is a reasonable starting point for experimentation. It is a poor mechanism for transformation. If the learning stays in one team, the organisation does not change. The capability needs to travel.

And the measure that matters is not whether the pilot worked. It is whether the organisation is behaving differently. Pilots are easy to declare successful. Genuine behavioural change is harder to fake, and it is the only thing that scales.

Back to the Dog

Srinath Sridharan ended his column with a quiet warning: the robodog incident will be forgotten soon enough, but the weakness it exposed should not be.

He was writing about ecosystems. The same warning applies inside organisations.

The robotic dog at that summit was not a failure of technology. It was a demonstration — built to show what was possible, staged to signal capability, presented as evidence of something that had not quite been built yet. The claim came apart not because the dog stopped working, but because the gap between what was said and what existed became visible.

Inside most organisations right now, there is a version of that dog. A capability that has been demonstrated but not yet embedded. A pilot that worked in controlled conditions but has not yet changed how the organisation actually operates. An announcement made with confidence, now sitting quietly in a corner, waiting for the foundations to catch up.

The question worth asking is a simple one: what is doing the work — the demonstration, or the capability behind it?

The organisations that will do well with AI are not necessarily the ones that moved fastest. They are the ones that built honestly. That fixed what the light revealed rather than switching it off. That treated the discomfort of visibility as useful information rather than a problem to manage. That understood, before anything else, that technology adoption is a change problem — and that capacity to change is the thing worth building.

They are the ones where, if a robotic dog walked in, it would have somewhere real to go.