Part 3 of 5 | Flyntrok Blog Series: When the Tool Changes
Five hundred and forty-two. That is the number of leaders currently on development journeys through Flyntrok’s platform — a proprietary tool that weaves organisational context with nudge theory to keep leaders engaged in their growth over time. Across those journeys, one pattern is nearly universal. Strategic and long-term thinking shows up, again and again, as the competency that needs the most development focus.
This is not just what we see internally. DDI’s Global Leadership Forecast 2025, spanning over 50 countries and 24 industry sectors, finds that leaders themselves name setting strategy as their greatest development need — with only 22% of HR teams currently prioritising it. The Center for Creative Leadership reports the same gap. It is persistent, and it is widespread.
Here is what struck us, though. The leaders on our platform are capable and successful. This is not a capacity problem. So, we began to ask a different question: what if strategic and long-term thinking is not underdeveloped because leaders cannot do it — but because the conditions around them have made it easy not to? And what if AI is about to change that?
Two Ends of a Spectrum
Picture a spectrum.
At one end is execution: tasks, outputs, deliverables — things you can point to at the end of the day and say, I did that.
At the other end is insight: judgment, sense-making, sitting with a problem that does not have a clean answer, working with the complexity of people and context.
Most leaders, when asked honestly, know which end of that spectrum absorbs most of their time. And most know their role asks for more of the insight end than they typically give it.
The real question is not whether leaders know this. It is why execution pulls ahead so naturally.
Why Execution Pulls Ahead
Execution has a lot going for it. It is urgent, visible, and satisfying. It produces something tangible. And in most organisations, the systems — the metrics, the meeting rhythms, the performance conversations — are built around outputs. As Cal Newport has observed, organisations tend to measure and reward what is easy to see: tasks completed, emails answered, decisions made. One thoughtful strategy conversation is harder to point to than ten smaller problems resolved.
Insight work asks for something different. It involves sitting with questions that do not resolve quickly. It requires comfort with uncertainty, and it can feel exposed in a way that execution rarely does. When it gets squeezed out, that is not because leaders lack the capability, but because the conditions for that kind of thinking have not always been built into the working day.
The systems were not designed to be hostile to strategic thinking. They were designed around what was easiest to see and measure. And that quietly shaped what got prioritised.
What AI Is Changing — And What It Is Really Asking
This is where generative AI becomes genuinely interesting. Not just as a productivity tool, but as something that surfaces a deeper question about role.
Generative AI is compressing the execution end of the spectrum. Reports, emails, data summaries, routine analysis — tasks that once took hours now take minutes. When that compression becomes large enough to notice, something shifts. Not just on your calendar. The question of what you are really here to do starts to surface.
The choices made at that point will define the role. Whether the recovered time gets filled with more execution, or invested in judgment calls, sense-making, and harder conversations — that is an identity question as much as a time management one.
The shift AI is prompting is not just a behaviour change. It is an invitation to recraft the role — to move from being the person who gets things done to being the person who thinks about what is worth doing and why.
Recrafting the Role
What lives at the insight end of the spectrum is not harder work. It is more human work.
MIT Sloan researchers recently mapped the capabilities AI is least able to replicate, grouping them under the EPOCH framework: Empathy, Presence, Opinion and Judgment, Creativity, and Hope. Significantly, they refuse to call these soft skills. A hard skill like solving a maths problem, they point out, is comparatively easy to teach. EPOCH capabilities are far harder to develop — and far more distinctly human.
These are precisely the capabilities that define the insight end of the spectrum. The ability to read what is really going on in a room. To make a call when no option is clean. To sit with a team navigating change and offer something more useful than a process. None of this is new to leadership — but for many leaders, it has been the part of the role that got squeezed by everything else that needed doing.
AI is not asking leaders to become different people. It is creating the conditions for them to become more fully the leaders they were already meant to be.
Recrafting the Organisation
This question does not sit with individuals alone. It is also an organisational design question.
If generative AI is compressing execution work, but the systems still measure and reward execution-style outputs, the shift will not happen at scale — regardless of individual intention. Organisations need to ask honestly: is there genuine space for ambiguity and strategic conversation? When insight-level thinking does happen, is it seen, valued, and reflected in how performance is understood? Redesigning what gets measured is as important as redesigning what gets done.
An Invitation
The capacity for judgment, sense-making, and strategic thinking has been there all along. What generative AI is changing are the conditions. With execution taking less time, the question of what to do with the space that opens becomes both more urgent and more possible to answer well.
That feels, to us, like an opportunity worth taking seriously.
This is the third post in a five-part series on how generative AI is redefining professional roles — not just making us faster. Read the opening post here: When the Tool Changes, But the Real Shift Is About You (Link).
