The AI Maturity Mirage: Diagnosing the Gap Between Investment and Readiness

Llantwit Major | Published in AI and Board | 11 minute read
A glass-walled boardroom at dusk showing executives reviewing glowing data visualisations, with the window reflection revealing fragmented metrics and red indicators to illustrate the gap between perceived and actual AI maturity (Image generated by ChatGPT 5)

Boards reviewing AI progress often see a promising landscape: multiple pilots underway, tools adopted across teams, and early efficiency wins emerging from departments eager to demonstrate value. This view can mislead, because visible activity bears little relation to genuine organisational capability.

Larridin’s State of Enterprise AI 2025 report exposes the scale of this disconnect: 89% of enterprises have adopted AI tools, yet only 23% can accurately measure their return on investment. This measurement gap represents the AI maturity mirage — the systematic overestimation that derails transformation strategies and destroys competitive positioning.

Gartner’s research shows high-maturity organisations sustain value creation for three years or more and report 4× higher stakeholder trust than their low-maturity counterparts. The difference between these outcomes and stalled transformation is not technology capability; it is honest self-assessment.

The pattern appears across sectors and regions alike. AI Digital Labs finds two-thirds of advertising agencies remain stuck in discussion or ad-hoc experimentation, with only 16% embedding AI systematically. McKinsey-LIMRA’s insurance analysis shows life insurers averaging just 2.8–3.2 out of 5 on maturity assessments, with fewer than 20% achieving scale.

IBM and Ecosystm’s 2025 APAC research shows 85% of organisations claim data-driven or AI-First status whilst only 11% demonstrate true readiness — a pattern that mirrors Western markets and suggests the overestimation is structural rather than cultural.

Beneath these headline statistics lies a deeper problem. PwC’s Global Workforce Hopes and Fears Survey 2025 shows 54% of workers have used AI in the past year, yet anxiety and uncertainty about its impact are spiking among employees, whilst Deloitte notes 63% of organisations remain unprepared for regulatory requirements. Surface-level tool adoption masks fundamental gaps in readiness across the Five Pillars that determine whether AI investments translate into sustained capability.

The organisations shaping AI transformation share a common characteristic: honest self-assessment that distinguishes genuine capability from comfortable illusion. Understanding how the mirage forms — and how to pierce it — begins with recognising the three patterns that create systematic overestimation.

Understanding the gap: three patterns of overestimation

Three reinforcing patterns create the AI maturity mirage, each compounding the others to produce systematic overestimation across sectors and geographies.

The first pattern is the tool-centric illusion, where Boards count AI tools deployed — generative AI in marketing, chatbots in customer service, automation in operations — as evidence of maturity. Without integrated infrastructure, however, these deployments create capability silos rather than organisational transformation. Deloitte’s 2025 research shows 60% of AI leaders cite legacy integration as their primary barrier, indicating that tool deployment without infrastructure coherence creates fragmentation, not advancement. In advertising specifically, AI Digital Labs reports 53.6% of agencies lack licensed AI tools, meaning apparent adoption often relies on ungoverned consumer applications that cannot scale.

The second pattern is the pilot success trap, where isolated wins create the seductive appearance of advancement. When a marketing team demonstrates productivity improvements or operations shows cost savings, Boards naturally assume the organisation has progressed toward genuine capability — yet the multi-speed adoption reality of AI tells a different story. Different functions sit at different maturity stages simultaneously, and success in one area reveals nothing about systemic readiness. McKinsey-LIMRA’s 2025 insurance research notes fewer than 20% of insurers achieve scale, with high variability in risk mitigation and data governance capabilities even among organisations claiming advanced status. PwC’s 2025 Responsible AI Survey correlates genuine maturity with 30–40% faster innovation cycles — a compound effect that isolated pilots cannot deliver. The shadow AI paradox I explored previously compounds this trap further: informal successes often drive the wins that suggest maturity whilst masking the systemic governance gaps that prevent scaling.

The third pattern is hype-driven assessment metrics. When Boards focus on short-term ROI from AI investments, they inadvertently reinforce the overestimation. McKinsey’s 2025 analysis identifies leadership alignment as the real bottleneck — not technology capability. The AI Digital Labs research crystallises this dynamic: agencies rate AI criticality at 8.1 out of 10 but embed it in only 16% of operations. The gap between perceived importance and actual integration defines the mirage condition.

The pattern extends beyond perception to measurement itself. AI Digital Labs reports 46.4% of agencies don’t track AI impact at all, creating retrospective bias where unmeasured successes inflate perceived maturity. People-centric barriers compound the problem: 60.7% cite skills gaps, 51.8% lack dedicated time for AI work, and only 26% run formal upskilling programmes — capability deficits that tool-counting metrics cannot reveal.

These three patterns reinforce each other in ways that compound overestimation: tool deployments create visibility without capability, pilot successes generate confidence without scalability, and hype-driven metrics validate both without accuracy. The result tends toward widening leader-laggard spreads where organisations with accurate self-assessment pull further ahead whilst those trapped in the mirage stall. The telling indicator: 50% of insurers take over a year to scale MVPs — a timeline that signals Experimenting-stage capability regardless of Board perception.

These patterns directly contradict balanced Five Pillars development. Organisations experiencing the mirage typically show tool adoption representing partial Technical Infrastructure investment without corresponding Governance and Accountability, People, Culture and Adoption, or Value Realisation maturity. The imbalance becomes self-perpetuating as continued investment in visible deployments diverts resources from the foundational capabilities that enable genuine advancement.

Diagnostic framework: piercing the illusion

Piercing the AI maturity mirage requires structured assessment that moves beyond tool inventories and pilot counts to reveal genuine organisational capability. The diagnostic approach combines mapping the AI Stages of Adoption (AISA) with Five Pillars evaluation to expose true position and identify targeted remedies.

The first step is mapping multi-speed reality across AISA stages. Honest assessment begins with acknowledging that different business functions sit at different maturity stages simultaneously. Rather than assigning a single organisational stage, Boards should map each function independently to one of the five AISA stages: Experimenting, Adopting, Optimising, Transforming, or Scaling. The common error is assuming uniform progress; the reality reveals stark variations — operations may show Adopting-stage governance whilst marketing remains in ungoverned Experimenting, and finance may barely qualify as Experimenting despite Board assumptions otherwise. A revealing indicator emerges from pilot progression: if initiatives consistently fail to scale within six months, the organisation likely remains in Experimenting stage regardless of investment levels or executive communications. The multi-speed reality is not a problem to solve but a condition to acknowledge and manage.
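To make the multi-speed mapping concrete, the sketch below shows one way a Board pack could record per-function AISA stages alongside pilot scaling history and apply the six-month test. It is a minimal illustration in Python; the function names, claimed stages, and scaling timelines are invented for the example and are not drawn from the research cited above.

    from dataclasses import dataclass
    from enum import IntEnum

    class AISAStage(IntEnum):
        # The five AI Stages of Adoption, ordered from least to most mature
        EXPERIMENTING = 1
        ADOPTING = 2
        OPTIMISING = 3
        TRANSFORMING = 4
        SCALING = 5

    @dataclass
    class FunctionAssessment:
        function: str
        claimed_stage: AISAStage       # stage reported to the Board
        months_to_scale_pilots: float  # median time for pilots to reach scale

    def evidenced_stage(a: FunctionAssessment) -> AISAStage:
        """Apply the six-month test: pilots that consistently fail to scale
        within six months indicate Experimenting-stage capability, whatever
        stage the function claims."""
        if a.months_to_scale_pilots > 6:
            return AISAStage.EXPERIMENTING
        return a.claimed_stage

    # Hypothetical portfolio illustrating multi-speed adoption
    portfolio = [
        FunctionAssessment("operations", AISAStage.ADOPTING, 5),
        FunctionAssessment("marketing", AISAStage.OPTIMISING, 11),
        FunctionAssessment("finance", AISAStage.ADOPTING, 14),
    ]

    for a in portfolio:
        print(f"{a.function}: claimed {a.claimed_stage.name}, evidenced {evidenced_stage(a).name}")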

The second step is evaluating Five Pillars balance. True maturity requires balanced capability development across all five domains, not concentrated investment in one or two areas. For each function mapped in the previous step, Boards should assess capability levels across all pillars. Governance and Accountability reveals whether minimum lovable structures exist or governance remains ad-hoc and reactive. Technical Infrastructure assessment should address data readiness directly—recall that the majority of organisations estimate their data isn’t AI-ready. Operational Excellence evaluation should examine whether performance can be sustained consistently across units, or whether extended MVP-to-scale timelines indicate capability gaps that prevent reliable operation. Value Realisation and Lifecycle Management assessment should determine whether metrics are balanced beyond efficiency, capturing innovation and customer value alongside cost reduction. People, Culture and Adoption evaluation should examine training programmes against capability gaps, recognising that formal upskilling remains the exception rather than the norm. Imbalance across these pillars indicates overestimation regardless of individual pillar strength.
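One way to express the balance test is to score each pillar for a given function and flag imbalance when the spread between strongest and weakest pillar grows too wide. The sketch below assumes a 1-to-5 scoring scale and a spread threshold of two levels; both are illustrative choices rather than part of the Five Pillars framework itself.

    FIVE_PILLARS = [
        "Governance and Accountability",
        "Technical Infrastructure",
        "Operational Excellence",
        "Value Realisation and Lifecycle Management",
        "People, Culture and Adoption",
    ]

    def pillar_imbalance(scores: dict[str, int]) -> tuple[int, list[str]]:
        """Return the spread between strongest and weakest pillar, plus the
        pillars sitting at the weakest level (assumed 1-5 scale)."""
        missing = [p for p in FIVE_PILLARS if p not in scores]
        if missing:
            raise ValueError(f"Unscored pillars: {missing}")
        weakest_score = min(scores.values())
        spread = max(scores.values()) - weakest_score
        weakest = [p for p, s in scores.items() if s == weakest_score]
        return spread, weakest

    # Hypothetical function: heavy tooling investment, thin governance
    marketing_scores = {
        "Governance and Accountability": 1,
        "Technical Infrastructure": 4,
        "Operational Excellence": 2,
        "Value Realisation and Lifecycle Management": 2,
        "People, Culture and Adoption": 2,
    }

    spread, weakest = pillar_imbalance(marketing_scores)
    if spread >= 2:  # assumed threshold for flagging overestimation risk
        print(f"Imbalance of {spread} levels; weakest pillars: {weakest}")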

The third step is testing with balanced indicators, because comprehensive maturity assessment requires three indicator types working together. Leading indicators signal future value potential through measures such as engagement rates, prototype velocity, and training completion, whilst lagging indicators confirm past value creation through sustained ROI, efficiency gains, and capability persistence. Predictive indicators model future scenarios using the kind of capabilities that distinguish advanced AI deployment. The diagnostic red flag is over-reliance on lagging indicators — or failing to track impact at all — which creates retrospective bias that inflates perceived maturity. Organisations caught in the mirage typically measure what has happened without monitoring what capability exists to make it happen again at scale.
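A simple indicator-mix check can surface that red flag in a Board dashboard: nothing tracked at all, or a mix dominated by lagging measures. The classification of each metric and the minimum-mix rules below are illustrative assumptions rather than a prescribed standard.

    from collections import Counter

    # Each tracked metric is tagged with an assumed indicator type
    tracked_metrics = {
        "quarterly cost savings": "lagging",
        "realised ROI": "lagging",
        "prototype velocity": "leading",
    }

    def indicator_red_flags(metrics: dict[str, str]) -> list[str]:
        """Flag retrospective bias: no tracking, no leading indicators,
        or no predictive indicators in the measurement mix."""
        flags = []
        counts = Counter(metrics.values())
        if not metrics:
            flags.append("No AI impact tracking at all")
        if counts["leading"] == 0:
            flags.append("No leading indicators of future value potential")
        if counts["predictive"] == 0:
            flags.append("No predictive indicators modelling future scenarios")
        return flags

    for flag in indicator_red_flags(tracked_metrics):
        print("Red flag:", flag)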

The implementation approach should prioritise quarterly diagnostics with external validation rather than relying solely on internal assessment. Focus diagnostic energy on capability gaps rather than confirming strengths—particularly people-centric barriers in creative and customer-facing functions, legacy integration challenges, and governance structures that remain ad-hoc rather than comprehensive. Building a portfolio perspective helps maintain momentum: allocating the majority of resources to quick wins provides evidence for accurate assessment whilst addressing foundational gaps that the illusion obscures.

From diagnosis to coherent actions: building true maturity

When diagnostics reveal governance gaps, the priority action is establishing Board-level AI oversight through an AI Centre of Excellence (AI CoE). This addresses not just policy gaps but the lack of coordination that allows multi-speed adoption to become ungoverned fragmentation. The AI CoE must enable rather than constrain; minimum lovable governance provides that oversight without bureaucratic burden, building the trust differential that distinguishes high-maturity organisations. Boards should resist the temptation to implement comprehensive governance frameworks prematurely; the goal is governance appropriate to the actual maturity stage, not governance that signals maturity the organisation has not achieved.

When diagnostics reveal infrastructure deficits, Boards face a nuanced challenge. I’ve previously argued against obsessing over perfect data as a prerequisite for AI progress — and that guidance stands. The mirage risk, however, runs in the opposite direction: assuming data infrastructure is adequate because tools appear to function. The diagnostic question is not whether data is perfect, but whether it is fit for purpose at the scale the organisation believes it has achieved. Organisations discovering significant gaps between tool deployment and data readiness should address infrastructure coherence before expanding their tool portfolio further. The goal is not perfection but alignment — ensuring data foundations can support the maturity stage the organisation actually occupies rather than the stage it assumes.

When diagnostics reveal people and cultural barriers, targeted programmes must address both technical capability and cultural readiness. The goal is reducing the engagement-readiness divide where Boards discuss AI regularly but feel inadequately equipped for oversight. Training programmes should build Five Pillars literacy alongside technical skills, enabling the balanced capability development that sustains advancement through AISA stages. The investment in people capability often shows slower returns than tool deployment but creates the foundation for compound advantage that overestimating organisations miss.

When diagnostic mapping reveals fragmented multi-speed adoption without coordination, the hub-and-spoke model provides structural remedy. Centralised expertise supports distributed execution, maintaining governance consistency whilst enabling business-unit adaptation. Initial integration efforts should focus on two or three domains showing highest maturity potential, using these as demonstrators for broader capability building. This creates systematic pathways from pilot success to organisational capability rather than leaving each function to discover integration challenges independently.

For organisations discovering they lack robust impact tracking, implementing Well-Advised scorecards ensures value assessment across all five dimensions: Innovation, Customer Value, Operational Excellence, Responsible Transformation, and Revenue. This directly counteracts hype-driven metrics by requiring evidence across multiple value types rather than single-dimension success stories that reinforce comfortable overestimation.
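As a sketch of how such a scorecard might enforce multi-dimensional evidence, the example below declines to report a headline summary unless every Well-Advised dimension carries at least one evidenced measure. The dimension names come from the framework as described above; the evidence structure and the completeness rule are assumptions made for illustration.

    WELL_ADVISED_DIMENSIONS = [
        "Innovation",
        "Customer Value",
        "Operational Excellence",
        "Responsible Transformation",
        "Revenue",
    ]

    def scorecard_summary(evidence: dict[str, list[str]]) -> str:
        """Summarise a Well-Advised scorecard only when every dimension has
        at least one evidenced measure; otherwise name the gaps."""
        gaps = [d for d in WELL_ADVISED_DIMENSIONS if not evidence.get(d)]
        if gaps:
            return "Incomplete scorecard; no evidence for: " + ", ".join(gaps)
        measures = sum(len(items) for items in evidence.values())
        return f"Balanced scorecard: {measures} measures across all five dimensions"

    # Hypothetical evidence concentrated on efficiency alone
    evidence = {
        "Operational Excellence": ["20% reduction in average handling time"],
        "Revenue": [],
    }
    print(scorecard_summary(evidence))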

Organisations piercing the mirage share common patterns. Those discovering through diagnostic assessment that perceived Optimising status actually reflects Experimenting capability — typically revealed through skills gaps and governance deficits — find that refocusing on people development before tool expansion accelerates genuine advancement. Others recognising that fragmented investments across departments created the appearance of progress implement minimum lovable governance to establish coordination, reducing scaling timelines toward sustainable quarterly cycles. The pattern across successful transitions is consistent: honest diagnosis enables targeted action that compounds over time.

From illusion to advantage

The AI maturity mirage is not inevitable — it is a symptom of incomplete assessment that accurate diagnosis can cure. The gap between tool adoption and measurable capability represents organisations yet to pierce the illusion, not some inherent limitation of AI transformation.

Boards applying AISA mapping and Five Pillars assessment reveal their true position—often discovering that perceived Optimising status reflects Experimenting capability, or that visible deployments mask fundamental governance gaps. This discovery, though initially uncomfortable, creates the foundation for targeted advancement rather than continued investment in comfortable illusions.

The organisations shaping AI transformation share a common characteristic: honest self-assessment that enables compound advantage rather than overestimation that delays it. When high-maturity organisations sustain value for three years or more whilst others stall at pilot stage, accurate diagnosis represents competitive strategy rather than pessimism.

The mirage will persist for organisations that mistake activity for capability. For those willing to look clearly, the path to genuine maturity becomes visible.

Let's Continue the Conversation

Thank you for reading about diagnosing the AI maturity mirage. I'd welcome hearing about your Board's experience assessing genuine AI capability: whether you're discovering gaps between tool adoption and measurable outcomes, navigating multi-speed adoption across different functions, or finding that honest self-assessment reveals a different picture than executive presentations suggest.




About the Author

Mario Thomas is a Chartered Director and Fellow of the Institute of Directors (IoD) with nearly three decades bridging software engineering, entrepreneurial leadership, and enterprise transformation. As Head of Applied AI & Emerging Technology Strategy at Amazon Web Services (AWS), he defines how AWS equips its global field organisation and clients to accelerate AI adoption and prepare for continuous technological disruption.

An alumnus of the London School of Economics and guest lecturer on the LSE Data Science & AI for Executives programme, Mario partners with Boards and executive teams to build the knowledge, skills, and behaviours needed to scale advanced technologies responsibly. His independently authored frameworks — including the AI Stages of Adoption (AISA), Five Pillars of AI Capability, and Well-Advised — are adopted internationally in enterprise engagements and cited by professional bodies advancing responsible AI adoption, including the IoD.

Mario's work has enabled organisations to move AI from experimentation to enterprise-scale impact, generating measurable business value through systematic governance and strategic adoption of AI, data, and cloud technologies.