The Great Remaking: Why Fast Following Does Not Work When the Gap Compounds

In the first article in this series, I argued that organisations redesigning their work around AI are building compounding advantages over those still augmenting the status quo — advantages already measurable in total shareholder return and revenue growth. The second article traced what the remaking of the essence of work looks like dimension by dimension, through thinking, deciding, creating, and delivering. Both articles left a question open: why does the gap between those who redesign and those who do not keep growing — and why can’t a late mover simply invest its way out of it?
That is the question this article addresses. The answer matters because Boards routinely deploy fast-follower logic as a reason not to move first. The reasoning is familiar: let early movers absorb the risk, observe what works, then acquire or build the equivalent capability at lower cost.
Why fast followers could catch up before — and cannot now
In every previous technology wave — desktop, internet, web, mobile, cloud — being a fast follower was a defensible position, because what early movers adopted was primarily a product or platform. It was observable, replicable, and improvable by a well-capitalised follower who skipped the validation cost and built correctly. The compounding gap of The Great Remaking is not built from a product. It is built from operational accumulation — proprietary data, experiential capability, and embedded process knowledge — that has no artefact to study and no design to replicate. You cannot build it from first principles. You can only build it over time.
In the internet era, a retailer that built an e-commerce channel in 1997 had a genuine advantage over one that built it in 2001. But by 2003, the late mover could deploy the same platforms, acquire the same operational knowledge, and close the gap through sustained investment. The early mover’s advantage was real but replicable — the technology was purchasable, the knowledge of how to deploy it became progressively more available, and the gap did not compound in a way that made it structurally irreversible.
The same logic applied in each subsequent wave — and cloud accelerated it. When Amazon Web Services opened access to enterprise-grade infrastructure in 2006, the gap between early movers and late movers narrowed further still: a company that had spent three years building data centres could be matched by one that had spent three months and a credit card. Cloud did not just make technology purchasable — it made it rentable by the hour, removing even the capital barrier that had previously slowed late movers down. The gap was a technology gap, and technology can be purchased. In 2026, that logic no longer holds.
Speaking on the Moonshots podcast with Peter Diamandis in January 2026, Elon Musk offered a historical analogy that illuminates the structural break with precision. He described the era when “being a computer was a job” — buildings full of people performing calculations by hand. When the spreadsheet arrived, he observed, “one laptop with a spreadsheet can outperform a skyscraper of several hundred human computers.” The critical observation concerned partial adoption: “if even a few cells in that spreadsheet were done manually, you would not be able to compete with a spreadsheet that was entirely a computer.” The analogy makes visible something about mixed adoption that is easy to understate. Partial integration of a transformative capability is not a stable competitive position. It is a temporary one, and the gap between partial and full integration widens as the capability matures. Organisations currently augmenting existing workflows with AI rather than redesigning them around it are, in this sense, running a few cells manually.
The compounding gap of The Great Remaking differs from previous waves not because the technology is superior but because the source of advantage is different. BCG’s finding that only 10% of AI value comes from algorithms, 20% from the technology required to implement them, and 70% from redesigning the people component is the structural evidence that settles the question. If 70% of the value is in redesign, then 70% of the gap cannot be closed by purchasing better technology. What remains is a systems gap built from proprietary data shaped by AI-integrated workflows, human capability developed through sustained practice, and institutional learning embedded through iterative redesign — none of which can be purchased from a vendor, replicated through a single transformation programme, or accumulated in any way other than time spent doing the work differently.
Three loops built from that operational accumulation explain why the gap keeps growing — data, talent, and process redesign — and each one compounds in a different way.
The first loop: data
The data loop is the most mechanically concrete of the three. Organisations that have redesigned their workflows around AI generate higher-quality, better-structured data than those still using AI as a bolt-on to existing processes — and that data, when fed back into the organisation’s AI systems, makes those systems more effective, which in turn generates better data. This is a self-reinforcing loop with no equivalent in previous technology waves. It is also the loop most frequently misunderstood: the competitive advantage is not data volume but data provenance.
Consider what AI-integrated workflows actually produce across the four dimensions of work. An organisation that has spent eighteen months redesigning its research and analytical functions around AI has created a proprietary feedback loop between output and institutional knowledge. The system has learned what questions the organisation asks, what contextual knowledge shapes interpretation, and what formats are operationally useful. An organisation that has integrated AI into its decision-making processes — credit assessment, risk modelling, operational planning — has accumulated a richer decision history than one that has relied on human judgement alone: more decisions, better documented, with outcome data that informs the next iteration of the model. An operation running predictive systems across its logistics or manufacturing environment for two years has a model refined against two years of real-world variance. In each case, the data asset that matters is not the raw data — it is the accumulated, contextualised, AI-refined version of it, shaped by the act of working differently, that cannot be acquired from outside.
This is precisely the distinction that fast-follower logic fails to account for. A late mover can purchase AI capability, cloud infrastructure, and even talent. It cannot purchase the eighteen or twenty-four months of operational data generated by AI-integrated workflows that a redesigned organisation has accumulated. The data moat is not about having more data. It is about having data that only exists if you have been redesigning how the work gets done. Raw data is available to any organisation. Institutional data shaped by AI-integrated operations only exists in organisations that have been building it.
The direction of travel makes this more consequential, not less. The Stanford HAI AI Index 2025 documents that the performance gap between the top and tenth-ranked AI models narrowed from nearly 12% to just 5.4% in a single year, and that inference costs for frontier-level capability fell 280-fold between 2022 and 2024. As model capability commoditises and near-frontier performance becomes available at minimal cost to any organisation, the source of AI competitive advantage shifts decisively — away from which model you access and towards what you have built around it. The organisations that have been constructing that data foundation for two years are not simply ahead on a linear scale. They hold a structural position that no procurement decision can replicate, because the asset does not exist until you have been doing the work differently — and it only deepens with time.
The second loop: talent
The talent loop is less immediately visible than the data loop, but it is arguably more durable. People who work alongside well-designed AI systems in genuinely redesigned workflows develop capabilities that cannot be replicated in organisations still using AI as an augmentation layer. These are not primarily technical skills — not prompt engineering or model configuration, although those matter at the margins. They are the deeper competencies that develop through sustained, high-quality human-AI collaboration: knowing when to trust AI outputs and when to challenge them; how to frame problems for effective AI assistance; where human judgement adds most value in a hybrid workflow; and how to interpret AI recommendations in the light of institutional context that the AI does not have. These competencies are tacit, experiential, and accumulate over time.
PwC’s 2025 Global AI Jobs Barometer, based on analysis of close to a billion job advertisements from six continents, captures the economic signal of this divergence. Workers with AI skills command a 56% wage premium over equivalent workers without them — up from 25% the prior year — and AI-exposed industries are generating three times the revenue growth per employee of those with the lowest AI exposure. These are signals of genuine value being created through AI-integrated work. But the same report names the structural constraint precisely. Peter Brown, PwC’s Global Workforce Leader, observed that “this is not a situation that employers can easily buy their way out of. Even if they can pay the premium required to attract talent with AI skills, those skills can quickly become out of date without investment in the systems to help the workforce learn.” The observation identifies the core problem: you cannot hire or train your way to the embedded, operationally tested capability that only develops through sustained integration.
BCG’s data that 88% of managers in future-built organisations actively role-model AI use and incorporate it into daily operations, compared with just 25% at laggards, is a directional measure of this divergence — but it understates the structural depth of the gap, because it measures current observable practice rather than the accumulated experiential capability underpinning it. The 63-percentage-point gap in management behaviour is visible. The gap in the operational knowledge that has developed through years of AI-integrated work is not directly measurable, but it is real, and it widens with every month that one group continues building it while the other does not.
The talent loop also has a second-order effect that the data loop does not. Organisations that visibly lead on AI-integrated work attract further talent that is already developing these capabilities — creating a compounding dynamic at the organisational level that mirrors the compounding at the individual level. The talent moat is not only about what people have learned. It is about who chooses to work there next.
The third loop: process redesign
The third loop is the most abstract of the three, and arguably the most durable. Organisations that redesign workflows around AI do not simply produce better processes — they develop institutional capability for redesign itself. They accumulate knowledge of what redesign looks like in their specific context: what cross-functional collaboration it requires, where it produces the most value, how to sequence it across the organisation, and what governance structures make it sustainable. Each redesign cycle makes the next one faster and more effective, because the organisation has developed the knowledge, relationships, and structural capacity that redesign demands. The loop compounds not just operational productivity but organisational learning capacity.
BCG’s research on AI at work finds that organisations which redesign workflows also learn more — about what works, where AI creates most value, and how to integrate human judgement with machine capability — than those treating AI as a productivity layer. That learning is a competitive asset that organisations still treating AI as a bolt-on tool never acquire, because bolt-on deployment does not require the workflow interrogation, cross-functional redesign, and iterative refinement that generates it. An organisation that has completed several redesign cycles has an institutional knowledge base about the practice of redesign that is as proprietary as its operational data. It cannot be purchased, and it cannot be accelerated by external consultants who lack the organisation’s operational context.
Unlike data, which requires systems to accumulate, or talent, which requires time to develop, redesign capability requires only the repeated decision to do it — but that decision, made consistently over two or three years, produces an organisational capacity for change that a late mover cannot replicate from a standing start.
The compounding effect of all three together
What makes the three loops analytically important is not their individual properties but their interaction. The data loop, the talent loop, and the process redesign loop do not operate independently. Richer data enables better AI performance, which enables more sophisticated human-AI collaboration, which develops talent more effectively. More capable talent designs better processes, which generates better-structured data. Better processes create cleaner feedback loops that further refine both data and the capability of the people working within them. These are the dynamics that systems thinking predicts: interconnected improvements that reinforce one another, producing emergent organisational capability that no single component could generate alone. This is why the compounding gap does not narrow over time as late movers invest more — the leaders’ three loops are accelerating precisely as the late movers begin to move.
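The dynamic described above — reinforcing loops that let a leader pull away even when a late mover invests more per month — can be sketched with a toy model. This is purely illustrative: the growth rates, investment levels, and synergy coefficient are assumptions chosen to make the shape of the curves visible, not empirical estimates of any organisation's trajectory.

```python
# Toy model: a leader whose three loops (data, talent, process) reinforce
# one another versus a late mover who invests more each month but gains
# only linearly. All parameters are illustrative assumptions.

MONTHS = 36

def leader_step(data, talent, process):
    # Each loop's monthly gain is amplified by the accumulated strength
    # of all three loops together (the reinforcing interaction).
    synergy = 1 + 0.02 * (data + talent + process)
    return (data + synergy, talent + synergy, process + synergy)

def follower_step(capability):
    # Heavier flat investment (1.5x the leader's base rate per month),
    # but no loop interaction, so growth stays linear.
    return capability + 1.5

leader = (0.0, 0.0, 0.0)
follower = 0.0
for month in range(1, MONTHS + 1):
    leader = leader_step(*leader)
    follower = follower_step(follower)
    if month % 12 == 0:
        gap = sum(leader) - 3 * follower  # like-for-like totals
        print(f"month {month:2d}: leader={sum(leader):7.1f} "
              f"follower={3 * follower:7.1f} gap={gap:+7.1f}")
```

Under these assumed parameters the follower is actually slightly ahead at month twelve — which is exactly what makes fast-follower logic feel safe early on — but the leader's compounding overtakes it in the second year and the gap accelerates from there.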
The window, and what it means for Boards
Understanding the three loops changes the nature of the strategic question. A Board that asks “how much AI are we deploying?” is measuring the technology layer — the roughly 30% of available value that is the most replicable component of the gap and the least predictive of long-term competitive position. The question that maps onto the compounding loops is different: are we building any of them?
Specifically: are our workflows generating proprietary data that compounds into AI advantage, or are they generating the same data they always have, with AI applied as a filter on top? Are our people developing AI-integrated capabilities through operational experience — the kind that builds through sustained practice — or through training programmes that convey the knowledge without replicating the experience? And are we developing institutional capability for redesign itself — the ability to redesign workflows repeatedly, improve with each cycle, and embed what we learn — or completing individual pilots and returning to the status quo?
These are specific, answerable questions, and the three loops do not require a completed transformation before they start operating — they require a design decision: to restructure a workflow rather than augment it. That decision is available to any organisation today, but its value compounds from the moment it is made and depreciates with each month it is deferred. The organisations already making it are not simply ahead. They are ahead and accelerating.
Understanding the mechanism of compounding is what makes that distinction real — and it is the foundation on which the board-level diagnostic, the subject of the next article in this series, must be built.
Let's Continue the Conversation
Thank you for reading about why the compounding gap between AI redesigners and fast followers widens with time rather than closing. I'd welcome hearing about your Board's experience with this tension — whether you're wrestling with the decision to redesign workflows rather than augment them, finding that the data and talent your organisation has accumulated is already creating an advantage you hadn't fully measured, or discovering that the fast-follower logic that served you well in previous technology waves is harder to apply here than expected.
