AI Centre of Excellence: Designing Structure for Multi-Speed Governance

Through our journey so far, we’ve built a comprehensive understanding of AI governance needs. We’ve seen why boards face an unprecedented challenge with millions of AI decisions per second, mapped the eighteen critical functions every AI CoE must fulfil, and discovered through the AI CoE Simulator how different parts of organisations naturally progress at different speeds. This foundation brings us to perhaps the most practical challenge: designing an organisational structure capable of governing AI initiatives that range from shadow experiments to enterprise transformations - all happening simultaneously.
The answer isn’t as straightforward as creating an organisational chart. Unlike traditional IT governance that assumes relatively uniform technology adoption, your AI CoE must simultaneously oversee experimental chatbot pilots, production-scale predictive maintenance systems, and everything in between. It needs to guide business functions just beginning to observe AI’s potential whilst governing others that are transforming their entire operating models.
This structural challenge becomes even more complex when you consider that 88% of AI pilots fail to reach production. Many failures stem not from technical issues but from governance structures that either stifle innovation with excessive control or enable chaos through insufficient oversight. The key is designing a structure that adapts and provides appropriate governance for each stage of AI maturity whilst maintaining coherent Board-level oversight.
The Structural Challenge of Multi-Speed Governance
If you’ve used the AI CoE Simulator from last week’s article, you’ve likely discovered the paradox in AI adoption: your marketing team might be transforming customer engagement with sophisticated AI whilst your finance department remains firmly on the sidelines observing, wary of AI’s implications for audit and compliance. Meanwhile, shadow AI proliferates as employees independently experiment with consumer tools, creating ungoverned risk.
Traditional governance structures fail in this environment because they assume uniformity. They’re designed for scenarios where the entire organisation moves through change at roughly the same pace, with IT leading and business units following. AI shatters this assumption. When your customer service team can implement a chatbot in weeks whilst your manufacturing AI initiative requires months of development, one-size-fits-all governance becomes either a stranglehold or a sieve.
This multi-speed reality demands a fundamentally different approach to structure. Your AI CoE can’t be a monolithic entity applying uniform governance. Instead, it must be an adaptive system capable of providing appropriate oversight and support to initiatives at every stage of maturity.
Core Design Principles for Adaptive Governance
Before diving into specific structures, let’s establish the principles that should guide your AI CoE design. These principles ensure your structure can handle the full spectrum of AI adoption whilst maintaining necessary oversight.
- Principle 1: Governance Intensity Must Match Maturity - Your AI CoE structure should apply different levels of governance based on an initiative’s stage and risk profile. Experimenting with customer sentiment analysis requires a lighter touch than deploying AI for credit decisions. This doesn’t mean early-stage initiatives escape governance - rather, the governance focus shifts from control to enablement and risk awareness.
- Principle 2: Federated Execution with Centralised Standards - Whilst standards, frameworks, and oversight must be centralised for consistency, execution should be as close to the business as possible. This federation ensures governance doesn’t become a bottleneck whilst maintaining necessary controls. Think of it as “loose-tight” - loose on implementation details, tight on principles and standards.
- Principle 3: Clear Escalation Paths to the Board - As I’ve emphasised throughout this series and in prior articles, your AI CoE must report directly to the Board’s risk committee. This isn’t about bureaucracy - it’s about ensuring appropriate visibility for decisions that could impact millions of customers in milliseconds. Your structure needs clear escalation triggers and paths that don’t require navigating complex hierarchies during crises - see the sketch following this list.
- Principle 4: Built-in Evolution Capability - Your AI CoE structure can’t be static. As different parts of your organisation progress through the AI Stages of Adoption (AISA), the structure must evolve to provide appropriate support. Design with evolution in mind - what works for an organisation with most functions at Experimenting won’t serve one with multiple areas at Transforming.
- Principle 5: Innovation Enablement, Not Innovation Theatre - The structure should accelerate responsible AI adoption, not create elaborate processes that simulate progress whilst achieving nothing. Every element should have a clear purpose in either enabling innovation or managing risk - preferably both.
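To make Principle 3 tangible, escalation triggers are worth writing down in a form people can actually test against rather than burying in a policy document. Here is a minimal sketch in Python - the triggers, routes, and response deadlines are invented placeholders for illustration, not a recommended policy:

```python
from datetime import timedelta

# Illustrative escalation rules. Triggers, routes, and deadlines are
# placeholder assumptions a real risk committee would calibrate.
ESCALATION_RULES = [
    # (trigger, escalation route, response deadline)
    ("suspected regulatory breach", "Board Risk Committee", timedelta(hours=4)),
    ("customer-facing AI incident", "AI CoE Director", timedelta(hours=24)),
    ("material model drift in production", "Governance Lead", timedelta(days=3)),
]

def escalation_for(trigger: str):
    """Return the escalation route and deadline for a trigger,
    or None when the issue stays within normal channels."""
    for condition, route, deadline in ESCALATION_RULES:
        if condition == trigger:
            return route, deadline
    return None

# A regulatory concern routes straight to the Board's risk committee.
print(escalation_for("suspected regulatory breach"))
```

The point isn’t the code - it’s that anyone in the organisation can see, unambiguously, which events bypass the hierarchy and how quickly a response is owed.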
The Hub-and-Spoke Model: A Foundation for Multi-Speed Governance
In my early work at AWS designing Cloud Centres of Excellence (CCoEs) for customers across industries, I found that the hub-and-spoke model provides the best foundation for managing multi-speed AI adoption. This isn’t a rigid prescription but rather a flexible framework you can adapt to your organisation’s specific needs.
The Central Hub: Your Core AI CoE
The hub serves as the nerve centre for AI governance, providing consistency and oversight whilst avoiding the bottleneck trap. Key responsibilities of the central hub include:
- Standards and Frameworks Development - The hub creates and maintains governance frameworks that apply across all AI initiatives, regardless of stage. This includes ethical guidelines, risk assessment templates, and decision-making frameworks. Importantly, these standards should be principle-based rather than prescriptive, allowing appropriate flexibility for different maturity stages.
- Board-Level Reporting and Risk Management - With direct reporting to the Board’s risk committee, the hub ensures appropriate visibility for AI initiatives. This includes maintaining a comprehensive view of AI adoption across the organisation, identifying systemic risks that might emerge from the interaction of multiple AI systems, and providing regular updates on both opportunities and threats.
- Capability Building and Knowledge Management - The hub coordinates AI capability development across the organisation, ensuring lessons learned in one area benefit others. This includes developing training programmes, maintaining repositories of best practices, and facilitating knowledge sharing between teams at different stages of adoption.
- Strategic Coordination - As different parts of the organisation advance through AISA stages at different speeds, the hub ensures their efforts remain aligned with overall strategic objectives. This prevents the emergence of conflicting AI initiatives or duplicated efforts whilst identifying opportunities for synergy.
The Distributed Spokes: Embedded AI Governance
The spokes extend AI governance into business units, providing local support whilst maintaining connection to central standards. Each major business unit or function should have an embedded AI governance presence, scaled appropriately to their AI maturity and ambitions.
For functions at the Experimenting stage, this might be a single AI champion who dedicates part of their time to AI governance whilst maintaining their regular role. As functions progress to Adopting and beyond, dedicated AI governance resources become necessary.
Key responsibilities of the spokes include:
- Local Implementation Support - Spokes translate central standards into practical implementation within their business context. They understand both the AI CoE’s governance requirements and their business unit’s specific needs, serving as bridges between the two.
- Use Case Identification and Prioritisation - Being embedded in the business, spokes can identify AI opportunities that might be invisible to a centralised team. They can also assess which use cases align with both local needs and enterprise strategy.
- Change Management and Adoption - Spokes lead change management efforts within their areas, adapting enterprise-wide programmes to local contexts. They understand their colleagues’ concerns and can address them more effectively than distant corporate functions.
- Feedback and Continuous Improvement - Perhaps most importantly, spokes provide real-world feedback to the hub about what’s working and what isn’t. This feedback loop ensures governance frameworks evolve based on practical experience rather than theoretical models.
Staffing Your AI CoE: Roles That Scale
The effectiveness of your AI CoE structure depends entirely on having the right people in the right roles. However, staffing needs evolve significantly as your organisation progresses through the AISA stages. Here’s how to think about staffing from inception through maturity.
Core Roles from Day One
Regardless of your organisation’s AI maturity, certain roles are essential from the moment you establish your AI CoE:
AI CoE Director
This role requires a unique combination of skills: technical understanding sufficient to engage with data scientists and engineers, business acumen to translate AI capabilities into strategic value, and governance expertise to manage risk without stifling innovation. Most critically, they need the gravitas and communication skills to interact effectively with board members.
The AI CoE Director reports directly to the Board’s risk committee, not through IT or another function. This positioning is crucial for maintaining independence and ensuring appropriate visibility for AI governance.
Governance Lead
Whilst the Director provides strategic oversight, the Governance Lead operationalises AI governance daily. They develop and maintain governance frameworks, coordinate risk assessments, and ensure compliance with both internal policies and external regulations. As AI regulations like the EU AI Act come into force, this role becomes even more critical.
Technical Architecture Lead
This role ensures AI initiatives build on solid technical foundations. They don’t need to be the deepest technical expert - that’s what your data scientists are for - but they must understand AI architecture well enough to identify risks and opportunities. They establish technical standards that ensure AI systems can scale, integrate, and operate reliably.
Value Realisation Lead
Too many AI initiatives fail because they never translate technical success into business value. The Value Realisation Lead ensures every AI initiative has clear business outcomes and tracks progress toward them. They work closely with business units to identify opportunities and measure impact across all Well-Advised dimensions.
Change Management Lead
AI transformation is ultimately about people, not technology. The Change Management Lead develops programmes that help employees adapt to AI-augmented work, addresses concerns about job displacement, and builds enthusiasm for AI’s possibilities. Without effective change management, even technically perfect AI implementations fail.
Evolving Staffing Models
As your organisation progresses through AISA stages, your staffing model must evolve:
Experimenting to Adopting Transition - Initially, these core roles might be part-time assignments for existing staff. As experimentation increases, dedicated resources become necessary. You’ll also need to identify and train AI champions in each business unit - enthusiasts who can promote responsible AI adoption within their areas.
Adopting to Optimising Evolution - At these stages, your AI CoE expands significantly. Specialist roles emerge: MLOps engineers to manage model lifecycles, bias auditors to ensure fairness, and vendor managers to handle the growing ecosystem of AI suppliers. Business units at these stages need dedicated AI governance resources, not just champions.
Transforming to Scaling Maturity - Organisations at these advanced stages need AI CoE structures that match their ambitions. This might include research teams exploring cutting-edge AI capabilities, partnership managers coordinating ecosystem initiatives, and education teams developing AI curricula for the entire workforce.
Governance Mechanisms by AISA Stage
Your AI CoE structure must deploy different governance mechanisms for initiatives at different AISA stages. This differentiated approach ensures appropriate oversight without creating unnecessary friction.
AISA Stage | Governance Focus | Key Mechanisms | Primary Approach |
---|---|---|---|
Experimenters | Enablement and risk awareness | Discovery & Guidance: AI awareness sessions; Lightweight documentation templates; Simple risk checklists; Regular “office hours”. Shadow AI Management: Amnesty programmes for unauthorised AI; Clear channels for approved tools; Risk education; Gradual transition to sanctioned AI | Building trust and capability whilst preventing major risks |
Adopters | Active oversight with enabling mindset | Formal Frameworks: Comprehensive risk assessments; Clear approval workflows; Documented roles & responsibilities; Regular governance reviews. Quality Assurance: Pre-deployment testing; Performance monitoring; Bias assessments; Incident response procedures | Shifting from guidance to active oversight |
Optimisers | Sophisticated continuous improvement | Advanced Monitoring: Real-time dashboards; Automated drift detection; Continuous compliance monitoring; Proactive risk identification. Value Tracking: ROI measurement; Cross-functional impact assessment; Strategic alignment reviews; Innovation pipeline management | Automated, data-driven governance for deeper insights |
Transformers & Scalers | Strategic impact and ecosystem coordination | Strategic Governance: Board-level reviews; Industry standards participation; Ecosystem partnerships; IP management. Innovation Support: Research protocols; Regulatory engagement; Knowledge sharing frameworks; Talent development | Shaping external environment alongside internal management |
The goal is to match governance intensity to maturity level - from light-touch enablement for experimenters to strategic ecosystem governance for the most advanced initiatives.
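One way to make this matching concrete is to encode it, so that intake tooling can route each new initiative to the right level of oversight automatically. The sketch below is illustrative Python: the stage names follow AISA, the mechanism lists are condensed from the table above, and the high-risk override is an assumption about how a team might handle cases like credit decisions:

```python
# Illustrative stage-to-governance mapping, condensed from the table above.
GOVERNANCE_BY_STAGE = {
    "experimenting": ["risk checklist", "lightweight documentation", "office hours"],
    "adopting": ["formal risk assessment", "approval workflow", "pre-deployment testing"],
    "optimising": ["real-time dashboards", "automated drift detection", "ROI tracking"],
    "transforming": ["board-level review", "regulatory engagement", "IP management"],
}

def required_mechanisms(stage: str, high_risk: bool = False) -> list[str]:
    """Return the governance mechanisms an initiative must complete.

    Assumption: high-risk use cases (e.g. credit decisions) receive
    'adopting'-level controls even while still experimenting.
    """
    mechanisms = list(GOVERNANCE_BY_STAGE[stage])
    if high_risk and stage == "experimenting":
        mechanisms += GOVERNANCE_BY_STAGE["adopting"]
    return mechanisms

# A high-risk experiment keeps the lighter mechanisms plus formal controls.
print(required_mechanisms("experimenting", high_risk=True))
```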
Organisational Models: Choosing Your Structure
Whilst the hub-and-spoke model provides a strong foundation, organisations can implement it in different ways. Here are four models I’ve seen work effectively:
Model | Description | Key Advantages | Key Disadvantages | Best Suited For |
---|---|---|---|---|
Centralised Excellence | All AI governance expertise resides in a central AI CoE, with business units receiving support through assigned liaisons | Clear accountability and consistent standards; Efficient use of scarce expertise; Strong risk control; Easier to establish | Can become a bottleneck; May lack business context; Risk of being seen as “governance police”; Difficult to scale | Organisations in early AISA stages with limited AI activity, or those in highly regulated industries |
Federated Partnership | Hub-and-spoke model with strong local presence in each major business unit, coordinated centrally | Balances consistency with local relevance; Scales effectively; Deep business understanding; Faster decision-making | Requires more resources; Risk of inconsistency; Needs strong coordination; Can create competing priorities | Large organisations with diverse business units at different AISA stages |
Distributed Embedding | AI governance fully embedded within business units, with minimal central coordination | Maximum business alignment; Fastest decision-making; Deep contextual understanding; High business ownership | Risk of inconsistent standards; Difficult to share learnings; Potential governance gaps; Challenging Board oversight | Highly decentralised organisations with strong existing governance cultures |
Evolutionary Hybrid | Explicitly evolves as the organisation matures, starting centralised and becoming more federated over time | Matches governance to maturity; Efficient resource utilisation; Builds capability systematically; Manages risk appropriately | Requires careful change management; Can create uncertainty during transitions; Needs clear evolution triggers; Complex to design initially | Most organisations, as it provides flexibility to adapt as AI adoption evolves |
Integration Points: Connecting Your AI CoE
Your AI CoE doesn’t operate in isolation. Its effectiveness depends on how well it integrates with existing organisational structures and external stakeholders.
Integration Point | Key Areas | Activities |
---|---|---|
Board & Risk Committee | Regular Reporting Cadences | Monthly operational updates; Quarterly strategic reviews; Immediate escalation protocols; Annual comprehensive assessments |
Board & Risk Committee | Clear Communication Protocols | Executive dashboards translating technical metrics; Risk heat maps; Strategic opportunity assessments; Competitive intelligence |
IT & Cloud CoE | Clear Delineation of Responsibilities | Cloud CoE: Infrastructure, platforms, technical standards. AI CoE: AI governance, use cases, value realisation. Joint: Architecture, security, data governance |
IT & Cloud CoE | Collaboration Mechanisms | Joint planning sessions; Shared technology roadmaps; Coordinated vendor management; Integrated training programmes |
Business Units by AISA Stage | Observing/Experimenting Units | Educational workshops; Lightweight consulting; Safe experimentation spaces; Success story sharing |
Business Units by AISA Stage | Adopting/Optimising Units | Dedicated governance resources; Regular review cycles; Capability building; Performance optimisation |
Business Units by AISA Stage | Transforming/Scaling Units | Strategic partnerships; Innovation co-creation; Ecosystem coordination; Thought leadership |
External Stakeholders | Regulatory Engagement | Proactive regulator dialogue; Industry standards participation; Compliance monitoring; Policy influence |
External Stakeholders | Vendor & Partner Ecosystem | Vendor assessment; Partnership governance; IP management; Innovation collaboration |
External Stakeholders | Customer & Public Relations | Transparency initiatives; Trust-building programmes; Ethical AI communications; Incident response |
This integrated approach ensures your AI CoE maintains effective connections across all critical touchpoints, from Board oversight to external stakeholder management.
Practical Implementation: From Design to Reality
Designing your AI CoE structure is just the beginning. Successful implementation requires a pragmatic approach that builds momentum whilst establishing necessary foundations.
Start with Minimum Lovable Governance
Resist the temptation to build a complete AI CoE structure from day one. Instead:
- Appoint the AI CoE Director and establish board reporting lines
- Create basic governance frameworks for immediate risks
- Identify AI champions in each major business unit
- Launch 2-3 pilot governance processes to test and refine
- Gather feedback and iterate based on real experience
This minimum lovable structure allows you to begin governing AI initiatives whilst learning what your organisation actually needs.
Build Based on Assessed Needs
Use the insights from your Week 3 assessment to prioritise capability building:
- If you discovered extensive shadow AI, prioritise establishing approved alternatives
- If certain functions are advancing rapidly, assign dedicated governance resources
- If you lack technical foundations, strengthen the technical architecture role
- If value realisation is weak, focus on business case development
Let actual needs drive structure evolution, not theoretical models.
Create Clear RACI Matrices
For each of the eighteen AI CoE functions, establish clear accountability:
- Responsible: Who does the work
- Accountable: Who ensures it’s done properly
- Consulted: Who provides input
- Informed: Who needs to know
This clarity prevents both gaps and overlaps in governance coverage.
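Accountability mappings like this tend to drift when they live only in a slide deck; encoding them makes gaps and overlaps mechanically checkable. A minimal sketch, assuming hypothetical function names and role assignments (only two of the eighteen functions are shown):

```python
# Hypothetical RACI entries for two of the eighteen AI CoE functions.
RACI = {
    "risk_assessment": {
        "R": ["Governance Lead"],
        "A": ["AI CoE Director"],
        "C": ["Technical Architecture Lead"],
        "I": ["Board Risk Committee"],
    },
    "value_tracking": {
        "R": ["Value Realisation Lead"],
        "A": ["AI CoE Director"],
        "C": ["Business unit spokes"],
        "I": ["Board Risk Committee"],
    },
}

def raci_problems(raci: dict) -> list[str]:
    """Flag gaps and overlaps: each function needs exactly one
    Accountable role and at least one Responsible role."""
    problems = []
    for function, roles in raci.items():
        if len(roles.get("A", [])) != 1:
            problems.append(f"{function}: needs exactly one Accountable role")
        if not roles.get("R"):
            problems.append(f"{function}: no Responsible role assigned")
    return problems

assert raci_problems(RACI) == []  # this example has no gaps or overlaps
```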
Establish Regular Operating Rhythms
Different governance needs require different cadences:
- Daily: Operational monitoring for production AI systems
- Weekly: Team coordination and issue resolution
- Monthly: Risk committee updates and governance reviews
- Quarterly: Strategic alignment and capability assessment
- Annually: Comprehensive governance framework review
These rhythms create predictability whilst maintaining responsiveness.
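Where teams automate reminders or reporting, these cadences can live in configuration rather than tribal knowledge. A minimal sketch, with assumed owners for each rhythm (nothing here is a real scheduling library - just a plain data structure a script or workflow tool could consume):

```python
from dataclasses import dataclass

@dataclass
class Rhythm:
    cadence: str   # how often the activity happens
    activity: str  # what happens at that cadence
    owner: str     # assumed role accountable for the output

# Governance operating rhythms, mirroring the list above;
# the owners are illustrative assumptions.
OPERATING_RHYTHMS = [
    Rhythm("daily", "operational monitoring of production AI systems", "Technical Architecture Lead"),
    Rhythm("weekly", "team coordination and issue resolution", "Governance Lead"),
    Rhythm("monthly", "risk committee updates and governance reviews", "AI CoE Director"),
    Rhythm("quarterly", "strategic alignment and capability assessment", "AI CoE Director"),
    Rhythm("annually", "comprehensive governance framework review", "Board Risk Committee"),
]

for r in OPERATING_RHYTHMS:
    print(f"{r.cadence:>9}: {r.activity} (owner: {r.owner})")
```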
Common Pitfalls and How to Avoid Them
In my day-to-day work, I’ve observed recurring patterns of failure. Here’s how to avoid them:
Pitfall 1: Over-engineering from the Start - Creating elaborate structures before understanding actual needs wastes resources and creates bureaucracy. Start simple and evolve based on experience.
Pitfall 2: Underestimating Cultural Change - Focusing solely on structure whilst ignoring the human element leads to resistance and failure. Invest equally in change management and communication.
Pitfall 3: Weak Board Connection - Positioning the AI CoE too low in the organisation limits its effectiveness. Ensure direct Board reporting from day one.
Pitfall 4: One-Size-Fits-All Governance - Applying the same governance to all AI initiatives regardless of maturity stifles innovation. Build in appropriate flexibility.
Pitfall 5: Isolation from Business - An AI CoE that becomes an ivory tower disconnected from business realities will fail. Maintain strong business embedding.
Your Path Forward
As you design your AI CoE structure, remember that perfect is the enemy of good. The most elegant organisational chart means nothing if it doesn’t enable responsible AI innovation whilst managing real risks.
Start by revisiting your Week 3 assessment results. Where are your different functions on their AI journey? What governance challenges does this multi-speed reality create? Which of the structural models best fits your organisational culture and AI ambitions?
Then take pragmatic first steps. Appoint your AI CoE Director. Establish board reporting lines. Create basic frameworks. Identify champions. Launch pilots. Learn and iterate.
Next week, we’ll explore how to build essential capabilities using the Five Pillars framework. With your structure in place, you’ll be ready to systematically develop the competencies needed for each stage of your AI journey.
Remember: your AI CoE structure should enable AI adoption, not constrain it. Design for the multi-speed reality you have, not the uniform journey you might wish for. Build in evolution from the start. And always maintain that crucial connection to Board-level oversight that ensures responsible innovation at scale.
The question isn’t whether you need an AI CoE structure - it’s how quickly you can build one that matches your multi-speed reality whilst maintaining coherent governance. The clock is ticking, and every day without proper structure is another day of ungoverned risk or missed opportunity.
Let's Continue the Conversation
I hope this article has helped you think about how to structure your AI Centre of Excellence for multi-speed governance. If you'd like to discuss your specific organisational context and structural options, I welcome the opportunity to exchange ideas.
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.