From Shadow AI to Strategic Asset: Building Your AI Centre of Excellence

In my previous articles about the AI Stages of Adoption and the Five Pillars of AI maturity and capability, I briefly touched on the role of the AI Centre of Excellence (AI CoE). Since publishing those pieces, I’ve spoken with numerous Boards and business leaders about AI adoption and the importance of board-level AI governance. A recurring question emerges in almost every conversation: “What are the practical steps to establishing an AI CoE in our business?”
The challenge lies not just in creating another oversight function (Boards don’t want governance for the sake of it), but in building an accelerator for responsible AI innovation. An effective AI CoE must act as both enabler and guardian, helping organisations harness AI’s potential while managing its risks. This dual mandate requires thoughtful design and strategic positioning within your organisation.
Balancing Innovation with Governance
In my early days at AWS, while working with three financial services customers in the early stages of cloud adoption, I developed the initial thought leadership around establishing a Cloud CoE. As a highly regulated industry, financial services offered me an opportunity to see the impact governance could have on cloud adoption and, ultimately, on the speed of the innovation engine at the heart of a business's journey to the cloud.
Cloud CoEs are crucial for accelerating adoption by building comprehensive frameworks, establishing governance models, and driving organisational transformation within the technology domain. They provide essential structure for successful cloud implementation and play a vital role in digital transformation. However, their focus remains primarily within the technology landscape, even as they reach into aspects of business operations. AI CoEs face a fundamentally different challenge.
AI extends far beyond technology implementation—it fundamentally transforms core business processes and decision-making across every department and function. AI systems can operate autonomously throughout the organisation, making them both more pervasive and more challenging to govern effectively than cloud infrastructure. This profound difference in scope and impact is why your AI CoE must sit alongside the Board, not within IT or your cloud CoE. When AI governance is delegated too far down the organisational hierarchy, it creates a dangerous disconnect between the Board’s accountability and the actual implementation of AI systems.
Core Functions of an AI Centre of Excellence
At its core, the AI CoE serves as the orchestrator of your organisation’s AI strategy, balancing governance with innovation acceleration. The reality in most businesses today is that AI adoption is already underway - often in disconnected pockets across the organisation - before any formal governance structure exists. This means your AI CoE can’t afford a lengthy setup phase where you meticulously design every process before acting. Instead, the most effective approach is iterative, building capability and responsibility as business needs emerge, starting with essential functions and expanding as your AI maturity grows.
In my conversations with organisations implementing AI governance, I’ve identified eighteen distinct areas of responsibility that an effective AI CoE must address. These areas align with the Five Pillars of AI capability I discussed in my previous article, providing a structured framework for developing AI maturity. However, when establishing a new AI CoE, organisations often find it overwhelming to tackle all these areas simultaneously.

The most successful AI CoEs start by focusing on core functions that provide immediate value and risk mitigation. They establish strong governance and accountability mechanisms, including frameworks for human-AI collaboration and safeguards against misuse. They build the necessary technical foundations, with pragmatic data governance that balances quality with speed to value—avoiding the trap of waiting for perfect data before moving forward. They implement clear value realisation processes that evaluate and prioritise business use cases based on potential returns. And critically, they invest in people and culture, understanding that technology alone cannot drive transformation without a workforce that embraces and effectively leverages AI capabilities.
The cultural dimension of AI adoption often determines success more than technical implementation. The most successful AI CoEs I’ve observed invest heavily in cultural enablement through carefully designed incentive structures and community building. Internal AI champions programmes that recognise and reward responsible AI innovation can create powerful peer-to-peer influence networks. Communities of practice that bring together technical and business teams create spaces for shared learning and collaborative problem-solving.
Some organisations have found success with “AI Dojos” - intensive learning environments where cross-functional teams work together on real business problems. Others implement recognition programmes specifically designed to highlight responsible AI use, not just technical innovation. The key is creating cultural mechanisms that reinforce the values your governance frameworks are designed to protect - treating cultural adoption not as a separate workstream but as an integral part of your governance approach.
This foundation creates a platform for growth, allowing the AI CoE to gradually expand its responsibilities across all capability pillars as the organisation’s AI maturity increases. Rather than trying to establish perfect governance from day one, successful AI CoEs evolve their capabilities in step with the organisation’s needs, providing just enough structure to enable safe innovation without creating unnecessary barriers.
Aligning with the AI Stages of Adoption
In my Five Pillars article, I outlined how organisations progress through the AI Stages of Adoption. Your AI CoE should evolve as your organisation advances through these stages, adapting its focus and structure to match your current level of AI maturity.
During the Experimenting stage, when organisations are just beginning their AI journey, the CoE should focus on enabling safe exploration and providing educational resources. The governance structures should be lightweight, designed to guide rather than restrict. The most effective approach at this stage is establishing minimal ethical guidelines and basic security requirements, creating boundaries within which teams can experiment freely without unnecessary constraints.
As you transition to the Adopting stage, your CoE needs to formalise its governance structures without creating bureaucratic bottlenecks. This means developing standardised processes for AI project approval and deployment while still maintaining the agility needed for innovation. The focus shifts from pure experimentation to establishing repeatable practices that can scale across multiple business units.
When advancing to the Optimising stage, the CoE’s emphasis should be on implementing sophisticated monitoring and quality control systems. This is when more detailed playbooks for technical teams become essential, and you should establish centres of expertise in key AI domains. The governance structure needs to mature from basic guidelines to comprehensive frameworks that address the complexities of specialised AI applications.
As your organisation moves into the Transforming stage, your CoE should advise leadership on organisational redesign to leverage AI capabilities fully. This includes developing advanced risk management frameworks for high-impact AI systems and guiding the integration of AI into core business processes. The governance becomes more sophisticated, focusing not just on individual AI applications but on how they interact with and transform your broader business ecosystem.
Finally, when reaching the Scaling stage, your CoE needs to extend its governance frameworks to encompass ecosystem partnerships. This includes developing protocols for data and model sharing, creating standards for AI interoperability, and advising board and executive leadership on strategic AI direction. The governance expands beyond your organisational boundaries to include partners, suppliers, and customers.
The key to success is recognising which stage your organisation is in and implementing governance structures appropriate to that level of maturity. Too much governance too early can stifle innovation; too little governance too late can create significant risks. The most successful AI CoEs I’ve seen have been deliberate about evolving their capabilities and focus areas in step with their organisation’s progression through these stages.
Building the Foundation: Team Structure
A carefully designed organisational structure is the cornerstone of an effective AI CoE. Unlike traditional technology governance functions that typically sit within IT, the AI CoE requires a unique configuration that reflects its cross-functional mandate and its significance to the board.
Your AI CoE leader must report directly to the board’s risk committee, not because of bureaucratic necessity, but because this reporting line ensures AI governance receives appropriate visibility and priority. This leader needs a rare combination of skills: technical understanding of AI capabilities and limitations, business acumen to evaluate strategic implications, and governance expertise to implement appropriate controls. I’ve found that the most effective AI CoE leaders have experience spanning technology implementation, business transformation, and risk management—a combination that enables them to speak credibly with all stakeholders.
The AI CoE shouldn’t operate in isolation from your existing enterprise risk management (ERM) frameworks. Rather than creating parallel risk structures, the most effective approach is integrating AI risks into your organisation’s established ERM processes. This means adapting your risk appetite statements to include AI-specific considerations, incorporating AI risks into your regular risk assessment cycles, and ensuring AI incidents feed into your broader incident management framework.
The distinction lies in how AI risks are identified and assessed, not in how they’re governed. AI introduces novel risk categories - from algorithmic bias to model drift - that require specialised expertise to identify, but once captured, these risks should flow through the same governance channels as other enterprise risks. This integration ensures AI doesn’t become a governance silo and allows your board to maintain a comprehensive view of organisational risk exposure.
The core team structure should evolve with your AI maturity, but even in the early stages, certain roles are essential. Beyond the expected technical experts—AI engineers, data scientists, and security specialists—you need bridge roles that translate between technical and business domains. Business value analysts assess and prioritise use cases based on potential returns. Change advocates work with business units to drive adoption and manage transformation. Learning specialists develop targeted training programmes for different stakeholder groups.
What truly distinguishes an AI CoE from other technology governance functions is the need for specialised roles focused on the unique challenges of AI. An AI ethics specialist evaluates potential societal impacts and ensures alignment with organisational values. Model governance experts validate that AI systems perform reliably and produce explainable outcomes. Bias mitigation specialists continuously test for and address potential discrimination in AI systems. And data stewards work to ensure that the foundation of all AI - high-quality, well-governed data - maintains integrity while remaining accessible.
The most successful AI CoEs I’ve observed also incorporate cross-functional representatives from key business areas. These individuals serve as both ambassadors back to their departments and subject matter experts who ensure the CoE’s governance frameworks reflect practical business realities. This creates a two-way dialogue that avoids the common pitfall of creating governance structures that look good on paper but fail in practice.
Remember that your team structure should be designed to evolve. Start with essential roles that address immediate needs and expand the team as your AI maturity grows. The goal isn’t to build a massive bureaucracy but to create a lean, effective team that provides just enough governance to enable safe innovation while preventing significant risks.
Protecting Intellectual Property and Managing Shadow AI
In today’s environment of readily available AI tools, one of the CoE’s crucial functions is managing the balance between enablement and control. Shadow AI - the unauthorised use of AI tools across your organisation - presents a complex governance challenge that perfectly illustrates why AI requires board-level oversight.
The risks of unmanaged shadow AI extend far beyond traditional shadow IT concerns. When employees use public AI tools to process organisational data, they may inadvertently expose confidential information, create regulatory compliance gaps, or compromise intellectual property rights. There are numerous examples of data being submitted to public AI models without any consideration of terms of service that grant the AI provider rights to use that input data. Meanwhile, inconsistent use of different AI tools across departments can lead to varying quality standards and contradictory outputs that damage both internal efficiency and customer experience.
Yet shadow AI also offers valuable signals about unmet organisational needs. When employees turn to unauthorised tools, they’re often trying to solve legitimate business problems for which they lack approved solutions. Rather than viewing shadow AI purely as a governance failure, savvy organisations treat it as market research - revealing where existing tooling and capabilities fall short of user needs.
The key question boards must address is: who owns the intellectual property generated using AI models? When employees use public AI tools to create content, code, or business solutions, the ownership of that output can be unclear. The legal landscape remains unsettled, with different AI providers offering varying terms of service regarding input data and generated content. Your AI CoE must establish clear guidelines on what types of information can be shared with which AI tools, who owns the resulting outputs, and how those outputs should be validated before business use.
Rather than creating a list of prohibited tools, your CoE should focus on providing approved alternatives that meet both user needs and governance requirements. The most effective approach I’ve seen is creating a tiered framework of approved tools:
- Tier 1: Enterprise-grade AI systems with strong governance controls for handling sensitive business data and high-stakes decisions
- Tier 2: Vetted tools for general business use with appropriate data handling safeguards
- Tier 3: Designated public tools permitted for specific, non-sensitive use cases with clear guidelines
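To make the tiers concrete, they can be encoded so that requests to use a tool with a given class of data are checked automatically before approval. Below is a minimal sketch in Python, assuming a hypothetical internal policy registry; the tool names, tier assignments, and data classifications are illustrative placeholders, not recommendations, and any real implementation would need to reflect your own approved toolkit and data classification scheme.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    ENTERPRISE = 1       # Tier 1: enterprise-grade, strong governance controls
    VETTED = 2           # Tier 2: vetted for general business use
    PUBLIC_APPROVED = 3  # Tier 3: designated public tools, non-sensitive use only

class DataClass(IntEnum):
    CONFIDENTIAL = 1     # lower number = more sensitive
    INTERNAL = 2
    PUBLIC = 3

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    tier: Tier
    max_sensitivity: DataClass  # most sensitive data class this tool may handle

# Illustrative registry of approved tools (names are hypothetical)
REGISTRY = [
    ApprovedTool("internal-llm-platform", Tier.ENTERPRISE, DataClass.CONFIDENTIAL),
    ApprovedTool("vetted-saas-assistant", Tier.VETTED, DataClass.INTERNAL),
    ApprovedTool("public-chat-tool", Tier.PUBLIC_APPROVED, DataClass.PUBLIC),
]

def is_permitted(tool_name: str, data_class: DataClass) -> bool:
    """Return True if the named tool is approved for the given data classification."""
    for tool in REGISTRY:
        if tool.name == tool_name:
            # Permitted only if the data is no more sensitive than the tool's ceiling
            return data_class >= tool.max_sensitivity
    return False  # unknown tools default to not permitted

print(is_permitted("public-chat-tool", DataClass.CONFIDENTIAL))      # False
print(is_permitted("internal-llm-platform", DataClass.CONFIDENTIAL)) # True
```

A check like this would sit alongside, not replace, human review: it can stop the most obvious mismatches automatically, while higher-risk use cases still route through your approval and escalation processes.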
This tiered approach allows tools to be matched to use cases based on their risk profiles, providing flexibility while maintaining control over critical areas.

Discovering the full extent of shadow AI usage requires a methodical approach. Begin with anonymous surveys to understand what tools employees are currently using and why. Analyse network traffic and expense reports to identify AI services being accessed. Most importantly, conduct interviews focused on understanding user needs rather than assigning blame - what problems are they trying to solve, and why aren't existing approved tools sufficient?
Once you’ve mapped the shadow AI landscape, implementing a structured transition programme becomes essential. The most successful approach I’ve observed is announcing a time-limited amnesty period. During this amnesty, employees can disclose their unauthorised AI use without fear of consequences, receive guidance on appropriate alternatives, and contribute to shaping the organisation’s official AI toolkit. This approach recognises that shadow AI often emerges not from malicious intent but from a genuine desire to improve productivity and innovation.
Throughout this transition, monitoring should focus on education and enablement rather than punishment. When unauthorised AI usage is detected, the response should be to understand the underlying need and provide appropriate alternatives rather than simply shutting down innovation. Your governance framework should establish clear boundaries while creating streamlined pathways for approving new tools when legitimate business needs aren’t met by existing options.
The shadow AI challenge perfectly illustrates how an effective AI CoE must span all Five Pillars of capability. It requires strong governance frameworks, technical infrastructure alternatives, operational monitoring, value assessment of use cases, and cultural change management. By approaching shadow AI with this comprehensive perspective, organisations can transform a potential governance risk into an innovation opportunity.
Measuring Success: Beyond Technical Metrics
The effectiveness of an AI CoE must be measured through concrete outcomes. This starts with clear metrics across multiple dimensions: governance effectiveness, innovation impact, and capability development.
Governance metrics need to focus on both protection and enablement—a balance that reflects the AI CoE’s dual mandate. Policy compliance rates and risk incident frequency demonstrate control effectiveness, while response time metrics show the AI CoE’s ability to enable rapid innovation without creating bottlenecks. The most valuable governance metrics aren’t just those that count activities (like the number of reviews completed) but those that measure outcomes (like the reduction in AI-related incidents or policy violations).
Innovation impact metrics should track both the quantity and quality of AI initiatives. Beyond simply counting the number of projects in your pipeline, measure how many successfully transition from experimentation to production, and the magnitude of their business impact. Track time-to-value metrics to ensure your governance processes aren’t creating unnecessary delays and monitor adoption rates to ensure AI solutions are actually being used once deployed.
The most comprehensive measurement frameworks I’ve seen include balanced scorecards with four components: technical performance indicators, business value metrics, governance effectiveness measures, and capability development indicators. This approach ensures you’re building a sustainable AI capability rather than just implementing technology or focusing exclusively on risk management. By measuring across these dimensions, you create accountability for the AI CoE’s full mandate.
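As a rough illustration of how such a scorecard might be kept in practice, here is a minimal sketch; the component names follow the four dimensions above, while the individual metrics, targets, and values are hypothetical examples rather than recommendations.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    target: float
    actual: float

    @property
    def on_track(self) -> bool:
        # Assumes "higher is better" metrics; invert for metrics like incident counts
        return self.actual >= self.target

# Illustrative scorecard grouped by the four components; all values are hypothetical
scorecard = {
    "technical_performance": [Metric("model_uptime_pct", 99.5, 99.7)],
    "business_value": [Metric("use_cases_in_production", 5, 3)],
    "governance_effectiveness": [Metric("policy_compliance_pct", 95.0, 97.0)],
    "capability_development": [Metric("staff_completed_ai_training_pct", 60.0, 72.0)],
}

for component, metrics in scorecard.items():
    for m in metrics:
        status = "on track" if m.on_track else "attention needed"
        print(f"{component}: {m.name} = {m.actual} (target {m.target}) - {status}")
```

The value of the exercise lies less in the tooling than in reviewing all four components together at a regular cadence, so no single dimension dominates the CoE's attention.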
Remember that your measurement approach should evolve with your AI maturity. Early in your journey, focus on leading indicators like engagement with the AI CoE and pipeline development. As your organisation matures, shift toward lagging indicators like business value delivered and reduction in risk incidents.
Getting Started: A Pragmatic Approach
Implementing an AI CoE can seem daunting, but the key is to start small and scale over time. Begin with these concrete steps:
- Secure board-level sponsorship. This isn’t just about budget approval—it’s about establishing the strategic importance of AI governance and ensuring the AI CoE has the authority it needs to be effective. Brief your board on both the opportunities and risks AI presents and be explicit about the CoE’s reporting relationship to the risk committee.
- Appoint a CoE leader with the right blend of technical expertise and strategic vision. Look for someone who can speak credibly with both technical teams and board members, understands governance principles, and has experience managing complex, cross-functional initiatives. This leader will set the tone for the entire programme.
- Establish a minimal viable governance framework. Rather than attempting to address all eighteen functional areas immediately, focus on the most critical elements: a simple approval process for AI initiatives, basic guidelines for responsible AI use, an audit of shadow AI, and clear escalation paths for high-risk applications. This creates boundaries within which innovation can safely occur.
- Identify one or two high-value, lower-risk pilot projects that can demonstrate early wins. Use these as opportunities to refine your governance approach while delivering tangible value. Success with these initial projects builds credibility and momentum for the AI CoE.
I’ve seen many organisations falter by trying to build a perfect CoE (whether that is for cloud or AI adoption) before acting. Instead, focus on creating an AI CoE that can deliver early wins while establishing core governance capabilities. You can then evolve the structure as your organisation’s AI adoption matures.
Remember, the goal isn’t perfection from day one but rather creating a foundation that can evolve as your AI maturity grows. Start with clear principles and processes, then refine based on practical experience and stakeholder feedback.
The future belongs to organisations that can balance innovation with governance, speed with control, and technical excellence with ethical considerations. Your AI CoE will be at the heart of that balance.
Let's Continue the Conversation
I hope this article has provided useful insights about building an AI Centre of Excellence. If you'd like to discuss how these concepts apply to your organisation's specific context, I welcome the opportunity to exchange ideas.
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.