
AI Centre of Excellence: Designing Structure for Multi-Speed Governance

Llantwit Major | Published in AI and Board | 12 minute read
Image: Executives in a glass-walled boardroom reviewing a hub-and-spoke AI governance chart, with a central AI CoE connected to business units at different stages of AI maturity. (Image generated by ChatGPT 4o).

Through our journey so far, we’ve built a comprehensive understanding of AI governance needs. We’ve seen why boards face an unprecedented challenge with millions of AI decisions per second, mapped the eighteen critical functions every AI CoE must fulfil, and discovered through the AI CoE Simulator how different parts of organisations naturally progress at different speeds. This foundation brings us to perhaps the most practical challenge: designing an organisational structure capable of governing AI initiatives that range from shadow experiments to enterprise transformations - all happening simultaneously.

The answer isn’t as straightforward as creating an organisational chart. Unlike traditional IT governance that assumes relatively uniform technology adoption, your AI CoE must simultaneously oversee experimental chatbot pilots, production-scale predictive maintenance systems, and everything in between. It needs to guide business functions just beginning to observe AI’s potential whilst governing others that are transforming their entire operating models.

This structural challenge becomes even more complex when you consider that 88% of AI pilots fail to reach production. Many failures stem not from technical issues but from governance structures that either stifle innovation with excessive control or enable chaos through insufficient oversight. The key is designing a structure that adapts and provides appropriate governance for each stage of AI maturity whilst maintaining coherent Board-level oversight.

The Structural Challenge of Multi-Speed Governance

If you’ve used the AI CoE Simulator from last week’s article, you’ve likely discovered the paradox in AI adoption: your marketing team might be transforming customer engagement with sophisticated AI whilst your finance department remains firmly on the sidelines observing, wary of AI’s implications for audit and compliance. Meanwhile, shadow AI proliferates as employees independently experiment with consumer tools, creating ungoverned risk.

Traditional governance structures fail in this environment because they assume uniformity. They’re designed for scenarios where the entire organisation moves through change at roughly the same pace, with IT leading and business units following. AI shatters this assumption. When your customer service team can implement a chatbot in weeks whilst your manufacturing AI initiative requires months of development, one-size-fits-all governance becomes either a stranglehold or a sieve.

This multi-speed reality demands a fundamentally different approach to structure. Your AI CoE can’t be a monolithic entity applying uniform governance. Instead, it must be an adaptive system capable of providing appropriate oversight and support to initiatives at every stage of maturity.

Core Design Principles for Adaptive Governance

Before diving into specific structures, let’s establish the principles that should guide your AI CoE design. These principles ensure your structure can handle the full spectrum of AI adoption whilst maintaining necessary oversight.

The Hub-and-Spoke Model: A Foundation for Multi-Speed Governance

Drawing on my early work at AWS designing Cloud Centres of Excellence (CCoE) for customers across industries, I’ve found that the hub-and-spoke model provides the best foundation for managing multi-speed AI adoption. This isn’t a rigid prescription but rather a flexible framework you can adapt to your organisation’s specific needs.

The Central Hub: Your Core AI CoE

The hub serves as the nerve centre for AI governance, providing consistency and oversight whilst avoiding the bottleneck trap. Key responsibilities of the central hub include:

The Distributed Spokes: Embedded AI Governance

The spokes extend AI governance into business units, providing local support whilst maintaining connection to central standards. Each major business unit or function should have an embedded AI governance presence, scaled appropriately to their AI maturity and ambitions.

For functions at the Experimenting stage, this might be a single AI champion who dedicates part of their time to AI governance whilst maintaining their regular role. As functions progress to Adopting and beyond, dedicated AI governance resources become necessary.

Key responsibilities of the spokes include:

Staffing Your AI CoE: Roles That Scale

The effectiveness of your AI CoE structure depends entirely on having the right people in the right roles. However, staffing needs evolve significantly as your organisation progresses through the AISA stages. Here’s how to think about staffing from inception through maturity.

Core Roles from Day One

Regardless of your organisation’s AI maturity, certain roles are essential from the moment you establish your AI CoE:

AI CoE Director

This role requires a unique combination of skills: technical understanding sufficient to engage with data scientists and engineers, business acumen to translate AI capabilities into strategic value, and governance expertise to manage risk without stifling innovation. Most critically, they need the gravitas and communication skills to interact effectively with board members.

The AI CoE Director reports directly to the Board’s risk committee, not through IT or another function. This positioning is crucial for maintaining independence and ensuring appropriate visibility for AI governance.

Governance Lead

Whilst the Director provides strategic oversight, the Governance Lead operationalises AI governance daily. They develop and maintain governance frameworks, coordinate risk assessments, and ensure compliance with both internal policies and external regulations. As AI regulations like the EU AI Act come into force, this role becomes even more critical.

Technical Architecture Lead

This role ensures AI initiatives build on solid technical foundations. They don’t need to be the deepest technical expert - that’s what your data scientists are for - but they must understand AI architecture well enough to identify risks and opportunities. They establish technical standards that ensure AI systems can scale, integrate, and operate reliably.

Value Realisation Lead

Too many AI initiatives fail because they never translate technical success into business value. The Value Realisation Lead ensures every AI initiative has clear business outcomes and tracks progress toward them. They work closely with business units to identify opportunities and measure impact across all Well-Advised dimensions.

Change Management Lead

AI transformation is ultimately about people, not technology. The Change Management Lead develops programmes that help employees adapt to AI-augmented work, addresses concerns about job displacement, and builds enthusiasm for AI’s possibilities. Without effective change management, even technically perfect AI implementations fail.

Evolving Staffing Models

As your organisation progresses through AISA stages, your staffing model must evolve:

Experimenting to Adopting Transition
Initially, these core roles might be part-time assignments for existing staff. As experimentation increases, dedicated resources become necessary. You’ll also need to identify and train AI champions in each business unit - enthusiasts who can promote responsible AI adoption within their areas.

Adopting to Optimising Evolution
At these stages, your AI CoE expands significantly. Specialist roles emerge: MLOps engineers to manage model lifecycles, bias auditors to ensure fairness, and vendor managers to handle the growing ecosystem of AI suppliers. Business units at these stages need dedicated AI governance resources, not just champions.

Transforming to Scaling Maturity
Organisations at these advanced stages need AI CoE structures that match their ambitions. This might include research teams exploring cutting-edge AI capabilities, partnership managers coordinating ecosystem initiatives, and education teams developing AI curricula for the entire workforce.

Governance Mechanisms by AISA Stage

Your AI CoE structure must deploy different governance mechanisms for initiatives at different AISA stages. This differentiated approach ensures appropriate oversight without creating unnecessary friction.

AISA Stage | Governance Focus | Key Mechanisms | Primary Approach
Experimenters | Enablement and risk awareness | Discovery & Guidance: AI awareness sessions; Lightweight documentation templates; Simple risk checklists; Regular “office hours”. Shadow AI Management: Amnesty programmes for unauthorised AI; Clear channels for approved tools; Risk education; Gradual transition to sanctioned AI | Building trust and capability whilst preventing major risks
Adopters | Active oversight with enabling mindset | Formal Frameworks: Comprehensive risk assessments; Clear approval workflows; Documented roles & responsibilities; Regular governance reviews. Quality Assurance: Pre-deployment testing; Performance monitoring; Bias assessments; Incident response procedures | Shifting from guidance to active oversight
Optimisers | Sophisticated continuous improvement | Advanced Monitoring: Real-time dashboards; Automated drift detection; Continuous compliance monitoring; Proactive risk identification. Value Tracking: ROI measurement; Cross-functional impact assessment; Strategic alignment reviews; Innovation pipeline management | Automated, data-driven governance for deeper insights
Transformers & Scalers | Strategic impact and ecosystem coordination | Strategic Governance: Board-level reviews; Industry standards participation; Ecosystem partnerships; IP management. Innovation Support: Research protocols; Regulatory engagement; Knowledge sharing frameworks; Talent development | Shaping external environment alongside internal management

The goal is to match governance intensity to maturity level - from light-touch enablement for experimenters to strategic ecosystem governance for the most advanced initiatives.
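
To make this differentiation tangible, the mapping above can be captured as a simple lookup that an intake or triage process might use to route each initiative to the right governance track. The sketch below is illustrative only: the data structure, function name, and condensed mechanism labels are my shorthand for the table, assumed for demonstration rather than drawn from any established schema.

```python
# Illustrative sketch: the stage-to-governance mapping above expressed as a
# simple lookup. Stage names and focus text come from the table; the structure
# and abbreviated mechanism labels are hypothetical shorthand.

GOVERNANCE_BY_STAGE = {
    "Experimenters": {
        "focus": "Enablement and risk awareness",
        "mechanisms": ["AI awareness sessions", "Lightweight documentation",
                       "Simple risk checklists", "Regular office hours"],
    },
    "Adopters": {
        "focus": "Active oversight with enabling mindset",
        "mechanisms": ["Comprehensive risk assessments", "Approval workflows",
                       "Pre-deployment testing", "Incident response procedures"],
    },
    "Optimisers": {
        "focus": "Sophisticated continuous improvement",
        "mechanisms": ["Real-time dashboards", "Automated drift detection",
                       "ROI measurement", "Strategic alignment reviews"],
    },
    "Transformers & Scalers": {
        "focus": "Strategic impact and ecosystem coordination",
        "mechanisms": ["Board-level reviews", "Industry standards participation",
                       "Ecosystem partnerships", "IP management"],
    },
}


def required_governance(stage: str) -> dict:
    """Return the governance focus and mechanisms for an initiative's AISA stage."""
    try:
        return GOVERNANCE_BY_STAGE[stage]
    except KeyError:
        raise ValueError(f"Unknown AISA stage: {stage}") from None


print(required_governance("Adopters")["focus"])
# Active oversight with enabling mindset
```

Whether the mapping lives in a policy document, a workflow tool, or a simple script, the design point is the same: the initiative’s stage, not the team that owns it, determines the governance applied.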

Organisational Models: Choosing Your Structure

Whilst the hub-and-spoke model provides a strong foundation, organisations can implement it in different ways. Here are four models I’ve seen work effectively:

Model | Description | Key Advantages | Key Disadvantages | Best Suited For
Centralised Excellence | All AI governance expertise resides in a central AI CoE, with business units receiving support through assigned liaisons | Clear accountability and consistent standards; Efficient use of scarce expertise; Strong risk control; Easier to establish | Can become a bottleneck; May lack business context; Risk of being seen as “governance police”; Difficult to scale | Organisations in early AISA stages with limited AI activity, or those in highly regulated industries
Federated Partnership | Hub-and-spoke model with strong local presence in each major business unit, coordinated centrally | Balances consistency with local relevance; Scales effectively; Deep business understanding; Faster decision-making | Requires more resources; Risk of inconsistency; Needs strong coordination; Can create competing priorities | Large organisations with diverse business units at different AISA stages
Distributed Embedding | AI governance fully embedded within business units, with minimal central coordination | Maximum business alignment; Fastest decision-making; Deep contextual understanding; High business ownership | Risk of inconsistent standards; Difficult to share learnings; Potential governance gaps; Challenging Board oversight | Highly decentralised organisations with strong existing governance cultures
Evolutionary Hybrid | Explicitly evolves as the organisation matures, starting centralised and becoming more federated over time | Matches governance to maturity; Efficient resource utilisation; Builds capability systematically; Manages risk appropriately | Requires careful change management; Can create uncertainty during transitions; Needs clear evolution triggers; Complex to design initially | Most organisations, as it provides flexibility to adapt as AI adoption evolves

Integration Points: Connecting Your AI CoE

Your AI CoE doesn’t operate in isolation. Its effectiveness depends on how well it integrates with existing organisational structures and external stakeholders.

Integration Point | Key Areas | Activities
Board & Risk Committee | Regular Reporting Cadences | Monthly operational updates; Quarterly strategic reviews; Immediate escalation protocols; Annual comprehensive assessments
Board & Risk Committee | Clear Communication Protocols | Executive dashboards translating technical metrics; Risk heat maps; Strategic opportunity assessments; Competitive intelligence
IT & Cloud CoE | Clear Delineation of Responsibilities | Cloud CoE: Infrastructure, platforms, technical standards. AI CoE: AI governance, use cases, value realisation. Joint: Architecture, security, data governance
IT & Cloud CoE | Collaboration Mechanisms | Joint planning sessions; Shared technology roadmaps; Coordinated vendor management; Integrated training programmes
Business Units by AISA Stage | Observing/Experimenting Units | Educational workshops; Lightweight consulting; Safe experimentation spaces; Success story sharing
Business Units by AISA Stage | Adopting/Optimising Units | Dedicated governance resources; Regular review cycles; Capability building; Performance optimisation
Business Units by AISA Stage | Transforming/Scaling Units | Strategic partnerships; Innovation co-creation; Ecosystem coordination; Thought leadership
External Stakeholders | Regulatory Engagement | Proactive regulator dialogue; Industry standards participation; Compliance monitoring; Policy influence
External Stakeholders | Vendor & Partner Ecosystem | Vendor assessment; Partnership governance; IP management; Innovation collaboration
External Stakeholders | Customer & Public Relations | Transparency initiatives; Trust-building programmes; Ethical AI communications; Incident response

This integrated approach ensures your AI CoE maintains effective connections across all critical touchpoints, from Board oversight to external stakeholder management.

Practical Implementation: From Design to Reality

Designing your AI CoE structure is just the beginning. Successful implementation requires a pragmatic approach that builds momentum whilst establishing necessary foundations.

Start with Minimum Lovable Governance

Resist the temptation to build a complete AI CoE structure from day one. Instead:

  1. Appoint the AI CoE Director and establish board reporting lines
  2. Create basic governance frameworks for immediate risks
  3. Identify AI champions in each major business unit
  4. Launch 2-3 pilot governance processes to test and refine
  5. Gather feedback and iterate based on real experience

This minimum viable structure allows you to begin governing AI initiatives whilst learning what your organisation actually needs.
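
As an example of what one of those pilot governance processes might look like in practice, here is a minimal sketch of a lightweight intake checklist for experimenting-stage initiatives. All field names, questions, and escalation thresholds are hypothetical; each organisation would define its own risk questions and routing rules.

```python
# Hypothetical sketch of a lightweight intake checklist for a pilot governance
# process. Fields and thresholds are illustrative only, not a prescribed standard.
from dataclasses import dataclass


@dataclass
class AIInitiativeIntake:
    name: str
    business_unit: str
    uses_personal_data: bool         # triggers privacy review
    customer_facing: bool            # triggers brand and communications review
    makes_automated_decisions: bool  # triggers fairness and bias review


def review_track(intake: AIInitiativeIntake) -> str:
    """Route an initiative to a light-touch, standard, or full governance review."""
    flags = sum([intake.uses_personal_data,
                 intake.customer_facing,
                 intake.makes_automated_decisions])
    if flags == 0:
        return "light-touch: champion sign-off and register entry"
    if flags == 1:
        return "standard: Governance Lead review within the monthly cycle"
    return "full: risk assessment and AI CoE Director approval before launch"


pilot = AIInitiativeIntake("Invoice-query chatbot", "Finance",
                           uses_personal_data=True,
                           customer_facing=False,
                           makes_automated_decisions=False)
print(review_track(pilot))  # standard: Governance Lead review within the monthly cycle
```

A checklist this small is deliberately light-touch: it keeps early experiments visible without imposing Adopter-stage controls on them.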

Build Based on Assessed Needs

Use the insights from your Week 3 assessment to prioritise capability building:

Let actual needs drive structure evolution, not theoretical models.

Create Clear RACI Matrices

For each of the eighteen AI CoE functions, establish clear accountability:

This clarity prevents both gaps and overlaps in governance coverage.
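
To illustrate, a RACI entry for a couple of functions might look like the sketch below. The two function names are examples only (the full eighteen are mapped in the earlier article), and the role assignments are assumptions intended to show the shape of the matrix rather than a recommended allocation.

```python
# Illustrative RACI sketch for two example AI CoE functions. Function names and
# assignments are hypothetical; each organisation would populate its own matrix
# across all eighteen functions.

RACI = {
    "AI risk assessment": {
        "Responsible": ["Governance Lead"],
        "Accountable": ["AI CoE Director"],
        "Consulted":   ["Technical Architecture Lead", "Business unit AI champion"],
        "Informed":    ["Board risk committee"],
    },
    "Value tracking": {
        "Responsible": ["Value Realisation Lead"],
        "Accountable": ["AI CoE Director"],
        "Consulted":   ["Business unit leadership"],
        "Informed":    ["Board risk committee"],
    },
}


def accountable_for(function: str) -> list[str]:
    """Return who is accountable for a given AI CoE function."""
    return RACI[function]["Accountable"]


print(accountable_for("AI risk assessment"))  # ['AI CoE Director']
```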

Establish Regular Operating Rhythms

Different governance needs require different cadences:

These rhythms create predictability whilst maintaining responsiveness.

Common Pitfalls and How to Avoid Them

In my day-to-day work, I’ve observed recurring patterns of failure. Here’s how to avoid them:

Pitfall 1: Over-engineering from the Start
Creating elaborate structures before understanding actual needs wastes resources and creates bureaucracy. Start simple and evolve based on experience.

Pitfall 2: Underestimating Cultural Change
Focusing solely on structure whilst ignoring the human element leads to resistance and failure. Invest equally in change management and communication.

Pitfall 3: Weak Board Connection
Positioning the AI CoE too low in the organisation limits its effectiveness. Ensure direct Board reporting from day one.

Pitfall 4: One-Size-Fits-All Governance
Applying the same governance to all AI initiatives regardless of maturity stifles innovation. Build in appropriate flexibility.

Pitfall 5: Isolation from Business
An AI CoE that becomes an ivory tower disconnected from business realities cannot govern effectively. Maintain strong business embedding.

Your Path Forward

As you design your AI CoE structure, remember that perfect is the enemy of good. The most elegant organisational chart means nothing if it doesn’t enable responsible AI innovation whilst managing real risks.

Start by revisiting your Week 3 assessment results. Where are your different functions on their AI journey? What governance challenges does this multi-speed reality create? Which of the structural models best fits your organisational culture and AI ambitions?

Then take pragmatic first steps. Appoint your AI CoE Director. Establish board reporting lines. Create basic frameworks. Identify champions. Launch pilots. Learn and iterate.

Next week, we’ll explore how to build essential capabilities using the Five Pillars framework. With your structure in place, you’ll be ready to systematically develop the competencies needed for each stage of your AI journey.

Remember: your AI CoE structure should enable AI adoption, not constrain it. Design for the multi-speed reality you have, not the uniform journey you might wish for. Build in evolution from the start. And always maintain that crucial connection to Board-level oversight that ensures responsible innovation at scale.

The question isn’t whether you need an AI CoE structure - it’s how quickly you can build one that matches your multi-speed reality whilst maintaining coherent governance. The clock is ticking, and every day without proper structure is another day of ungoverned risk or missed opportunity.

Let's Continue the Conversation

I hope this article has helped you think about how to structure your AI Centre of Excellence for multi-speed governance. If you'd like to discuss your specific organisational context and structural options, I welcome the opportunity to exchange ideas.




About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.