AI Centre of Excellence: Mapping Your Multi-Speed AI Reality

Most organisations struggle to assess their AI maturity accurately because traditional approaches assume uniform progress across the business. The picture is far more complex - and the AI CoE Simulator, which I’m sharing today, helps reveal the multi-speed reality that boards need to understand.
The CEO declares confidently at a Board meeting: “We’re well into our AI transformation journey.” The head of marketing nods enthusiastically - his team has been using AI for personalised campaigns for eighteen months. The CFO looks sceptical; her department is still debating whether AI is relevant to finance. Meanwhile, operations has been quietly running predictive maintenance pilots, HR hasn’t started exploring AI, and somewhere in the organisation, dozens of employees are using ChatGPT for tasks no one knows about. This is the multi-speed reality of AI adoption.
The Assumption of Uniform Progress
When I developed the AI Stages of Adoption (AISA), I deliberately moved away from the traditional assumption that organisations progress uniformly through technology adoption. Unlike cloud computing, which has typically followed a sequential adoption path led by IT, AI adoption happens simultaneously across different functions, at different speeds, with different levels of maturity.
While AISA shares some DNA with the cloud-focused Migration Readiness Assessment (MRA) I co-authored at AWS, it addresses a fundamentally different challenge. Where the MRA helps organisations understand their readiness for a largely IT-led cloud journey, AISA recognises that AI adoption is business-led, multi-directional, and happens at varying speeds across different functions. The MRA could meaningfully assess an organisation’s overall cloud readiness; with AI, there is no single readiness score - only a complex map of different maturities across the business.
This multi-speed reality isn’t a failure of coordination; it’s the natural state of AI adoption. Yet most Boards think of their organisation as being at a single stage of AI maturity. We speak of being “AI-ready” or “digitally transformed” as if these were binary states rather than complex, multifaceted realities.
This gap between perception and reality has real consequences. According to CIO.com’s research, 88% of AI pilots fail to reach production, and The Economist reported just last month that 42% of companies are now abandoning their generative AI projects entirely. While there are many contributing factors, misunderstanding organisational readiness - assuming uniform capability when the reality is dramatically different - ranks among the primary causes.
This mirrors what I saw during the early cloud adoption era at AWS. Organisations would declare themselves “cloud-first” while most of their workloads remained on-premises. The difference with AI is that this disconnect is harder to spot because AI adoption is distributed across the business rather than centralised in IT.
Why Traditional Assessment Fails for AI
Traditional maturity assessments worked well for sequential technology adoption. You could meaningfully say “we’re at stage 3 of our ERP implementation” or “we’re in the project phase of cloud adoption.” These assessments assumed a single journey with common milestones.
AI breaks this model fundamentally. As I outlined in my original AISA framework, different parts of your organisation naturally exist at different stages simultaneously. This isn’t a bug - it’s a feature of how AI spreads through organisations.
Consider how this plays out in practice. Marketing teams often lead AI adoption because tools like generative AI for content creation offer immediate, visible value with relatively low risk. They can experiment with ChatGPT or Claude without needing massive infrastructure changes, governance frameworks or purchase orders. Meanwhile, your finance team faces a different reality. They’re dealing with regulated processes, audit requirements, and the need for explainable decisions. Their caution isn’t resistance - it’s appropriate risk management.
This pattern - enthusiastic adoption in some areas, careful consideration in others - creates clusters of AI maturity in a landscape of varying readiness. Traditional assessments that assume uniform progress completely miss this distributed reality.
The Hidden Complexity of Shadow AI
One of the most significant discoveries organisations make when properly assessing their AI landscape is the extent of shadow AI. In my article on building an AI Centre of Excellence, I discussed how shadow AI presents unique governance challenges. But it’s during assessment that the true scale becomes apparent.
The proliferation of accessible AI tools has created a situation unlike anything we saw with shadow IT. When employees needed unauthorised software in the past, they typically had to install something or sign up for a service. With AI, they’re often using tools already available - from ChatGPT to Copilot to countless other services, often on unmanaged devices.
This creates assessment challenges at multiple levels. First, you have the detection problem: how do you even know what’s being used? Unlike traditional software that appears in network logs or expense reports, AI usage can be virtually invisible. Second, you have the risk assessment problem: employees using AI for customer communications, strategic planning, or sensitive data analysis may be creating risks the board isn’t even aware exist.
Discovering Your True Position
Understanding where your organisation truly stands requires a systematic approach that goes beyond surveys or self-assessment. This is where AISA proves its value. Over the past year, I’ve been building a software toolkit for use by Boards to help them govern AI use in their organisations.
One of those tools - the AI CoE Simulator, which I’ve simplified and now published publicly on my website - operationalises AISA into a practical assessment tool. Rather than asking subjective questions like “How mature is your AI adoption?”, it uses specific criteria to place each function objectively within the AISA stages. In this short video, I demonstrate how the simulator works:
It starts by acknowledging a fundamental truth: most organisations begin at Observing - not even on the formal AISA journey yet. This isn’t a failing; it’s reality. Organisations that are aware of AI but haven’t taken meaningful action are Observing. They’re watching, learning, perhaps worried about being left behind, but not yet experimenting in any structured way.
When you begin the assessment, resist the temptation to immediately place your organisation at Experimenting or beyond. The simulator forces this honesty by requiring you to actively choose to move beyond Observing. This hurdle is important - it prevents the grade inflation that plagues most self-assessments.
As you evaluate each function, the simulator presents specific characteristics and indicators for each stage. For instance, Experimenting has clear markers: individual departments exploring AI independently, no formal AI strategy or governance framework, limited or ad hoc budget allocation, and heavy reliance on third-party AI solutions. If your marketing team has been running ChatGPT experiments without formal governance or budget allocation, they’re Experimenting - even if those experiments have been running for months.
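For readers who prefer to see those markers as data, here is a minimal sketch of how a stage checklist could be represented and matched against what you actually observe in a function. The Experimenting markers are taken from this article; the Python structure, the matches_stage helper, and the 75% threshold are illustrative assumptions, not the simulator’s implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only: stage markers as an explicit checklist.
# The Experimenting markers below follow this article; the class names,
# matching rule, and threshold are hypothetical, not the simulator's.

@dataclass
class StageProfile:
    name: str
    markers: list[str]  # observable characteristics used to place a function

EXPERIMENTING = StageProfile(
    name="Experimenting",
    markers=[
        "Individual departments exploring AI independently",
        "No formal AI strategy or governance framework",
        "Limited or ad hoc budget allocation",
        "Heavy reliance on third-party AI solutions",
    ],
)

def matches_stage(observed: set[str], stage: StageProfile, threshold: float = 0.75) -> bool:
    """Treat a function as sitting at a stage when most of its markers are observed."""
    hits = sum(1 for marker in stage.markers if marker in observed)
    return hits / len(stage.markers) >= threshold

# A marketing team running informal ChatGPT experiments would tick most boxes:
observed_in_marketing = {
    "Individual departments exploring AI independently",
    "No formal AI strategy or governance framework",
    "Heavy reliance on third-party AI solutions",
}
print(matches_stage(observed_in_marketing, EXPERIMENTING))  # True (3 of 4 markers)
```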
The Transition Reality Check
One of the most valuable aspects of structured assessment is understanding readiness for transition between stages. Moving from Experimenting to Adopting isn’t just a matter of time or investment - it requires meeting specific criteria.
The simulator presents these as mandatory and recommended criteria. Mandatory criteria are non-negotiable; without them, progression is risky or impossible. Recommended criteria smooth the transition but aren’t absolute requirements. This distinction helps boards understand where to focus effort.
For example, transitioning from Experimenting to Adopting requires mandatory elements like executive sponsorship for AI initiatives, initial governance frameworks, and dedicated budget allocation. These aren’t arbitrary requirements - they’re based on patterns of what enables successful progression versus what leads to stalled or, worse, abandoned initiatives.
By clicking through each criterion to mark it as met, partially met, or not met, you build a clear picture of readiness. A function showing 30% readiness has significant gaps to address. One showing 80% readiness might be ready to progress with focused effort on the remaining gaps.
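To make the arithmetic behind those readiness figures concrete, here is a minimal sketch of one way to score transition criteria. The met, partially met, and not met states and the mandatory-versus-recommended distinction come from the article; the equal weighting, the example statuses, and the function names are assumptions for illustration, not the simulator’s actual scoring.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    MET = 1.0
    PARTIAL = 0.5
    NOT_MET = 0.0

@dataclass
class Criterion:
    description: str
    mandatory: bool
    status: Status

def readiness(criteria: list[Criterion]) -> tuple[int, bool]:
    """Return (readiness %, whether every mandatory criterion is fully met).

    Simple unweighted average; a real tool might weight mandatory criteria
    more heavily. This is a hypothetical scoring rule, not the simulator's.
    """
    score = sum(c.status.value for c in criteria) / len(criteria) * 100
    mandatory_ok = all(c.status is Status.MET for c in criteria if c.mandatory)
    return round(score), mandatory_ok

# Example: Experimenting -> Adopting transition for a hypothetical marketing function
marketing = [
    Criterion("Executive sponsorship for AI initiatives", True, Status.PARTIAL),
    Criterion("Initial governance framework in place", True, Status.NOT_MET),
    Criterion("Dedicated budget allocation", True, Status.MET),
    Criterion("AI skills development plan", False, Status.PARTIAL),
]

score, cleared = readiness(marketing)
print(f"Readiness: {score}% | mandatory criteria met: {cleared}")
# -> "Readiness: 50% | mandatory criteria met: False" - gaps remain despite progress
```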
Understanding Your Five Pillars Maturity
Assessment isn’t just about AISA stages - it’s about understanding capability maturity across the Five Pillars I detailed in the previous article. The simulator shows how each pillar’s maturity naturally varies based on your current stage.
At the Experimenting stage, you’d expect to see low maturity across most pillars - perhaps 20-30%. That’s appropriate. The danger comes when organisations at early stages think they need 80% maturity across all pillars. This perfectionism paralysis prevents progress.
The multi-speed toggle in the simulator reveals another crucial insight: even within a single stage, pillar maturity varies. Your Governance & Accountability might lag behind Technical Infrastructure, or your People, Culture & Adoption might be more advanced than your Value Realisation capabilities. This variation is normal and informative.
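As a sketch of how that variation might be captured and inspected, the snippet below records per-pillar maturity for two hypothetical functions and flags the imbalances discussed in the next section (governance trailing technical capability, or the reverse). The pillar names are the four mentioned in this article; the scores, the 25-point gap threshold, and the flagging rule are illustrative assumptions.

```python
# Illustrative sketch: per-function pillar maturity (0-100%), using the pillar
# names mentioned in this article (the full Five Pillars are detailed in the
# previous article). Values and the flagging rule below are hypothetical.

maturity = {
    "Marketing": {
        "Governance & Accountability": 25,
        "Technical Infrastructure": 60,
        "People, Culture & Adoption": 55,
        "Value Realisation": 30,
    },
    "Finance": {
        "Governance & Accountability": 70,
        "Technical Infrastructure": 20,
        "People, Culture & Adoption": 25,
        "Value Realisation": 15,
    },
}

def flag_patterns(scores: dict[str, dict[str, int]], gap: int = 25) -> list[str]:
    """Flag functions where governance trails technical capability (risk exposure)
    or where governance far exceeds it (possible over-control of innovation)."""
    flags = []
    for function, pillars in scores.items():
        gov = pillars["Governance & Accountability"]
        tech = pillars["Technical Infrastructure"]
        if tech - gov >= gap:
            flags.append(f"{function}: technical capability ahead of governance - risk exposure")
        elif gov - tech >= gap:
            flags.append(f"{function}: governance well ahead of technical capability - may be over-controlling")
    return flags

for note in flag_patterns(maturity):
    print(note)
```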
From Pattern Recognition to Strategic Insight
As you work through the assessment, patterns emerge that tell important stories. Functions with strong technical capabilities but weak governance are prime candidates for risk exposure. Areas with high governance maturity but low technical infrastructure might be over-controlling innovation. These patterns inform where your AI CoE needs to focus.
The simulator’s use case examples for each stage help validate your assessment. If the use cases ring true - if you recognise your own initiatives in them - you’ve likely placed yourself correctly. If they seem too advanced or too basic, reassess your positioning.
Particularly valuable are the action recommendations provided for each stage. These aren’t generic “best practices” but stage-specific actions. For Observing organisations, quick wins might include AI awareness sessions and identifying first pilot opportunities. For those at Optimising, strategic initiatives might focus on MLOps implementation and cross-functional scaling.
Armed with this comprehensive assessment - the patterns identified, the transition readiness calculated, and the stage-specific actions outlined - you transform how AI is discussed at board level. The conversation shifts from opinion-based assertions (“I think we’re doing well with AI”) to evidence-based analysis (“Here’s our objective assessment across all functions”). When you can show marketing at Transforming while HR remains at Observing, the multi-speed reality becomes undeniable and the path forward becomes clearer.
Beyond Assessment to Understanding
The value of proper assessment extends beyond simply knowing where you are. It provides crucial insights for governance design, investment prioritisation, and capability building.
Understanding that you have functions at different stages shapes how you structure your AI CoE. You need governance mechanisms that can handle both experimental pilots and production-scale AI simultaneously. You need investment approaches that support both quick wins and strategic capability building. You need talent strategies that develop skills for current needs while building for future stages.
Most importantly, proper assessment addresses the natural assumption of uniform progress. It replaces “we need to accelerate AI adoption” with nuanced understanding: which functions should progress faster, where governance gaps create unacceptable risk, how to channel rather than suppress shadow AI, and why some functions appropriately remain at earlier stages.
Your Assessment Journey
Before next week’s article on designing your AI CoE structure, you need to understand what you’re designing it to govern. Use the AI CoE Simulator to map your organisation’s true AI landscape.
Start with radical honesty - most organisations discover they’re earlier in the journey than they thought. Work through each function independently, acknowledging that multi-speed adoption is natural. Use the transition criteria to understand readiness for progression.
The goal isn’t to achieve uniform maturity across all functions. It’s to understand your multi-speed reality so you can govern it effectively. That governance design is what we’ll explore next week.
Let's Continue the Conversation
I hope this article and the AI CoE Simulator have helped you understand the importance of assessing your organisation's true AI maturity across different functions. If you'd like to discuss your assessment results or explore how to address the multi-speed reality in your organisation, I welcome the opportunity to exchange ideas.
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.