AI Centre of Excellence: The Essential Functions of the Five Pillars

Llantwit Major | Published in AI and Board | 12 minute read
A modern control room with 18 illuminated panels arranged in five distinct colour-coded groups, each displaying abstract representations of AI governance functions, with silhouettes of executives observing the unified system (generated by ChatGPT 4o).

What does an AI Centre of Excellence (AI CoE) actually do? Last week, I established why boards need an AI CoE to govern AI’s unprecedented decision velocity and multi-speed adoption challenges. But recognising the need is only the first step. The real question is: what specific functions must an AI CoE perform to deliver effective governance of the use of AI?

Through my work with organisations navigating AI adoption, I’ve identified eighteen essential functions of an AI CoE, which provide comprehensive coverage while remaining manageable — the minimum lovable governance for enterprise AI.

These functions naturally align with the Five Pillars capability areas, ensuring systematic development across all dimensions necessary for AI success. More importantly, they adapt to your organisation’s position in the AI Stages of Adoption (AISA), allowing focused investment where it matters most and at the right time.

The Architecture of AI Governance

The eighteen functions work as an integrated system, not a checklist. Weakness in one area cascades through others, while strength in foundational functions amplifies success across the entire system. Understanding these interconnections helps Boards prioritise investment and avoid common implementation pitfalls.

This interconnectedness explains why the functions group naturally around the Five Pillars. Each pillar requires specific governance capabilities, but success demands they work in harmony. Let me detail each pillar’s essential functions and their critical interdependencies.

Governance & Accountability: The Foundation

The four functions of the Governance & Accountability pillar establish the ethical and legal framework within which all AI operates. These aren’t abstract policies but operational capabilities that govern millions of daily decisions.

Human-AI collaboration frameworks define the boundaries between human and machine decision-making. This function establishes when AI can act autonomously, when it must seek human approval, and when humans must override AI recommendations. Consider a financial services firm that might create clear escalation thresholds: AI could approve loans under £10,000 with standard risk profiles, but unusual patterns or larger amounts would trigger human review. Without this function, organisations face either paralysing every AI decision with human approval or accepting unrestricted autonomous operation.
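The escalation logic above can be sketched in a few lines. This is a minimal illustration, not a production routing engine; the £10,000 limit comes from the example, while the risk bands and anomaly score are hypothetical placeholders for outputs of upstream risk models.

```python
from dataclasses import dataclass

# Hypothetical thresholds illustrating the escalation logic; real values
# would come from the organisation's agreed risk framework.
AUTO_APPROVAL_LIMIT = 10_000          # loans under £10,000 may be auto-approved
STANDARD_RISK_BANDS = {"low", "medium"}

@dataclass
class LoanApplication:
    amount: float
    risk_band: str        # output of an upstream risk model (assumed)
    anomaly_score: float  # 0.0 (typical) to 1.0 (highly unusual)

def route_decision(app: LoanApplication) -> str:
    """Decide whether the AI may act autonomously or must escalate."""
    if app.anomaly_score > 0.8:
        return "human_review"      # unusual patterns always escalate
    if app.amount < AUTO_APPROVAL_LIMIT and app.risk_band in STANDARD_RISK_BANDS:
        return "ai_autonomous"     # within the agreed autonomy boundary
    return "human_review"          # larger amounts need human approval

print(route_decision(LoanApplication(5_000, "low", 0.1)))   # ai_autonomous
print(route_decision(LoanApplication(50_000, "low", 0.1)))  # human_review
```

The value of encoding the boundary explicitly is that it becomes reviewable and auditable, rather than living in individual judgement calls.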

AI vulnerability management addresses the unique risks AI systems face. Unlike traditional software vulnerabilities, AI systems can be compromised through data poisoning, adversarial examples, or model manipulation. This function implements continuous monitoring for these AI-specific threats. Imagine a retailer discovering competitors manipulating product reviews to confuse their recommendation engine; a vulnerability management function would identify such anomalies and initiate countermeasures. Organisations without this function remain blind to AI-specific attacks until the damage is done.

Misuse and harmful content prevention ensures AI systems cannot be weaponised or generate harmful outputs. This goes beyond simple content filtering to include use case restrictions and comprehensive audit trails. Consider a healthcare organisation whose diagnostic AI might be queried in ways that could enable insurance discrimination. Proper misuse prevention protocols would block such usage and provide the audit trail necessary for regulatory reporting. Without this function, organisations face liability for AI misuse they didn’t even know was occurring.

Accountability and transparency protocols ensure every AI decision can be explained and attributed. This isn’t about making AI algorithms transparent (often impossible with deep learning) but about maintaining clear ownership and explainability for outcomes. When a manufacturer’s AI-driven pricing model might face regulatory scrutiny, transparency protocols would provide complete decision lineage: what data was used, which models were involved, who approved the deployment, and how outcomes were monitored. Organisations lacking this function cannot defend their AI decisions to regulators, courts, or their own Boards.

Technical Infrastructure: The Enabler

The three Technical Infrastructure functions provide the foundational capabilities that make governed AI possible. Without robust technical foundations, even the best governance intentions fail in implementation.

Data quality and governance standards establish the foundation for trustworthy AI. This function defines how data is collected, cleaned, validated, and maintained throughout its lifecycle. Poor data quality doesn’t just reduce AI accuracy; it can embed systematic biases and create legal liability. Consider a bank whose loan approval AI might show bias against certain postcodes, not because of the algorithm but because historical data reflected past discriminatory practices. A proper data governance function would include bias detection in data preparation, not just model testing. Organisations without this function build sophisticated AI on foundations of sand.
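One simple check of the kind that bias detection in data preparation might include is a disparate impact ratio across groups in the historical data. This is a sketch with hypothetical postcode districts and records; the 0.8 warning threshold is the widely used "four-fifths rule", though any real programme would apply far more than a single metric.

```python
from collections import defaultdict

def approval_rates(records):
    """Historical approval rate per group (e.g. postcode district)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of lowest to highest group approval rate.
    Values below 0.8 are a common warning threshold (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical training data: (postcode district, was the loan approved?)
history = [("CF61", True), ("CF61", True), ("CF61", True), ("CF61", False),
           ("CF10", True), ("CF10", False), ("CF10", False), ("CF10", False)]

rates = approval_rates(history)
print(rates)                    # {'CF61': 0.75, 'CF10': 0.25}
print(disparate_impact(rates))  # 0.33... -> flag for review before training
```

Running a check like this before model training catches inherited bias at the source, rather than discovering it in production outcomes.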

Security and privacy controls protect both AI systems and the data they process. This function implements technical safeguards against unauthorised access, ensures compliance with privacy regulations, and protects intellectual property embedded in AI models. Imagine a pharmaceutical company whose drug discovery AI contains billions of pounds worth of research insights. Security controls would include not just access management but techniques to prevent model extraction attacks where competitors might reconstruct proprietary models. Without this function, organisations risk catastrophic breaches of both data and competitive advantage.

Technical architecture guidelines ensure AI systems integrate properly with existing infrastructure while maintaining scalability. This function establishes standards for everything from model deployment patterns to API designs. Consider a retailer whose successful store-level inventory AI couldn’t scale to online operations due to architectural mismatches. Proper architecture guidelines would ensure compatibility from the start. Organisations lacking this function create AI silos that limit value and increase technical debt.

Operational Excellence: The Sustainer

The three Operational Excellence functions ensure AI systems deliver reliable value over time, not just impressive pilots. These functions transform experimental success into operational reality.

MLOps and model lifecycle management governs AI models from development through retirement. This function tracks model versions, monitors performance degradation, and manages the complex dependencies between models, data, and business processes. Consider an insurance company whose fraud detection model’s accuracy might decline 15% over six months as criminal patterns evolve. A proper MLOps function would detect this drift early and initiate retraining, preventing millions in potential losses. Without this function, AI systems decay invisibly until failures become catastrophic.
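The drift check described above reduces to comparing recent live accuracy against the accuracy measured at deployment. This is a deliberately minimal sketch; the baseline, window size, and tolerance are illustrative assumptions, and real MLOps platforms track many more signals than raw accuracy.

```python
def rolling_accuracy(outcomes, window=100):
    """Accuracy over the most recent `window` predictions.
    `outcomes` is a list of 1 (correct prediction) or 0 (miss)."""
    recent = outcomes[-window:]
    return sum(recent) / len(recent)

def drift_detected(baseline_accuracy, outcomes, tolerance=0.05, window=100):
    """Flag retraining when live accuracy falls more than `tolerance`
    below the accuracy measured at deployment time."""
    return baseline_accuracy - rolling_accuracy(outcomes, window) > tolerance

# Hypothetical fraud model: 92% accuracy at deployment, recent window at 85%.
baseline = 0.92
recent_outcomes = [1] * 85 + [0] * 15
print(drift_detected(baseline, recent_outcomes))  # True -> trigger retraining
```

The point is that decay becomes visible the week it starts, not six months later when losses force an investigation.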

Performance monitoring and optimisation continuously tracks AI system effectiveness across multiple dimensions. This goes beyond simple accuracy metrics to include fairness measures, computational efficiency, and business impact. Imagine a logistics company whose routing AI shows 95% accuracy in testing even as customer complaints rise. Performance monitoring might reveal that while routes are theoretically optimal, they don’t account for driver familiarity with neighbourhoods, leading to delayed deliveries. Organisations without this function optimise metrics that don’t matter while missing degradation in real business value.

Integration with existing operations ensures AI enhances rather than disrupts current business processes. This function manages the complex choreography of human and AI work. Consider a medical diagnostic AI that achieves remarkable accuracy but sees low physician adoption. The issue might be that the AI’s workflow doesn’t match clinical patterns; doctors need insights at different decision points than the AI provides. Proper integration design would identify this mismatch early. Without this function, even brilliant AI fails to deliver value because it doesn’t fit how work actually happens.

Value Realisation & Lifecycle Management: The Validator

The four Value Realisation functions ensure AI investments deliver promised returns while managing the full lifecycle from vendor selection through value capture.

Business case evaluation (using Well-Advised dimensions) assesses every AI initiative across all five strategic priorities: Innovation, Customer Value, Operational Excellence, Responsible Transformation, and Revenue. This prevents the common trap of pursuing AI for narrow benefits while missing broader opportunities. Consider a manufacturer’s predictive maintenance AI that initially focuses solely on cost reduction. Well-Advised evaluation might reveal additional opportunities: new service offerings (Innovation), improved customer uptime (Customer Value), and potential licensing to other manufacturers (Revenue). Without this function, organisations systematically undervalue AI investments.

Cost optimisation strategies manage the unique economics of AI systems. Unlike traditional software with predictable costs, AI expenses vary dramatically based on usage patterns, model complexity, and data volumes. Imagine a retailer whose recommendation engine costs explode as the customer base grows, threatening the business case. A cost optimisation function might implement techniques like model distillation and edge deployment, reducing costs 70% while maintaining performance. Organisations lacking this function face runaway AI costs that destroy ROI.

Vendor and IP management navigates the complex landscape of AI suppliers while protecting organisational innovations. This function handles everything from negotiating model licensing to ensuring training data doesn’t compromise proprietary information. Consider a financial services firm whose vendor’s terms might grant rights to insights derived from their data, potentially sharing competitive intelligence. Proper vendor management would catch this before contract signing. Without this function, organisations inadvertently surrender competitive advantage through poor commercial terms.

Benefits tracking across strategic pillars monitors value creation comprehensively, not just financial returns. This function tracks leading indicators (early signals that suggest future value), lagging indicators (confirmed outcomes of past value creation), and predictive indicators (future value potential) across all Well-Advised dimensions. Imagine a healthcare system whose diagnostic AI shows modest cost savings but comprehensive benefits tracking reveals the true value: reduced diagnostic errors that prevent malpractice lawsuits, improved patient outcomes, and enhanced physician satisfaction. Organisations without this function miss the full value story, potentially abandoning transformative initiatives based on narrow metrics.

People, Culture & Adoption: The Catalyst

The four People, Culture & Adoption functions address AI’s human dimension, often the difference between technical success and business transformation.

Skills assessment and development builds AI capabilities across the organisation, not just in technical teams. This function identifies capability gaps and implements targeted development programmes. Consider a bank that might discover its biggest AI constraint isn’t data scientists but business analysts who can bridge between AI capabilities and business needs. Its skills programme would then emphasise this translator role. Without this function, organisations create AI capabilities that remain locked in technical silos.

Change management programmes help employees adapt to AI-augmented work environments. This function addresses the full spectrum of human concerns from job security to new ways of working. When a customer service organisation introduces AI assistants, team members might initially resist, viewing them as job threats. An effective change programme would reframe AI as “bionic agents”—enhancing human capability rather than replacing it. Team satisfaction and performance would both improve. Organisations lacking this function face passive resistance that undermines AI value.

Bias identification and mitigation ensures AI systems operate fairly across all populations. This function implements systematic testing for discriminatory outcomes and creates processes for remediation. Consider an employer whose CV screening AI might show bias against candidates from certain universities, not because of programming but because historical data on successful employees reflected past hiring biases. A bias mitigation process would include regular audits and diverse review panels. Without this function, organisations face legal liability and reputational damage from discriminatory AI.

Stakeholder education and engagement builds understanding and trust across all groups affected by AI. This function creates tailored communication for employees, customers, partners, and investors. Imagine a manufacturer whose AI-driven quality system initially faces customer scepticism. An engagement programme might include factory tours showing human oversight, transparency reports on AI decisions, and customer advisory panels. Trust and adoption would follow. Organisations without this function struggle with stakeholder resistance based on misunderstanding or mistrust.

The Dynamic Implementation Reality

These eighteen functions aren’t implemented equally at all times. Their relative importance shifts based on your position in the AISA journey.

During Experimenting, organisations need basic governance frameworks and initial technical standards. Focus on human-AI collaboration frameworks, data quality standards, and skills assessment. Don’t over-engineer functions like vendor management when you’re still learning what AI can do.

As you progress to Adopting, accountability protocols and MLOps become critical. You’re moving from proofs of concept to production systems that affect real customers and operations. Security controls and change management can no longer be afterthoughts.

The Optimising stage demands sophisticated performance monitoring and cost optimisation. Your AI systems are scaling, making operational excellence and value realisation functions essential. Integration with existing operations becomes paramount as AI moves from the periphery to the core.

When reaching Transforming, all functions require maturity but stakeholder engagement and bias mitigation become especially critical. You’re fundamentally changing how business operates, requiring deep cultural adaptation and trust.

At Scaling, the emphasis shifts to ecosystem considerations. Vendor management becomes strategic as you coordinate multiple AI partners. Architecture guidelines must support not just internal scale but ecosystem integration.

Common Implementation Mistakes

Three mistakes consistently undermine AI CoE effectiveness:

Attempting all eighteen functions simultaneously overwhelms organisations and dilutes focus. Even with a Board mandate and resources, building all capabilities at once creates confusion and competing priorities. Start with functions critical to your current AISA stage and build systematically.

Skipping foundational functions to chase advanced capabilities creates houses of cards. I’ve seen organisations implement sophisticated MLOps while lacking basic accountability protocols. When incidents occur, they cannot explain who’s responsible for AI decisions. Build foundations before sophistication.

Treating functions as independent workstreams misses crucial interdependencies. Bias mitigation without proper data governance treats symptoms while ignoring causes. Cost optimisation without architecture guidelines creates tactical fixes that don’t scale. Functions must be coordinated, not just completed.

Your Path to Comprehensive AI Governance

Understanding these eighteen functions provides the operational blueprint for your AI CoE. But knowledge without action delivers no value. Next week, in article 3 of this series, I’ll show you how to assess your organisation’s current state against these functions, identifying which need immediate attention based on your position in the AI Stages of Adoption journey.

The eighteen functions aren’t aspirational; they’re the minimum lovable governance (a term inspired by Eric Ries’s book “The Startup Way”) for enterprise AI. Whether you’re experimenting with initial pilots or scaling AI across ecosystems, these functions provide the operational framework for success. The question isn’t whether you need all eighteen, but which ones you need most urgently.

Your AI systems continue making millions of decisions. Each decision without proper governance adds risk. But with these eighteen functions operational, those same decisions become sources of competitive advantage. The transformation begins with understanding what comprehensive AI governance requires.

Let's Continue the Conversation

I hope this deep dive into the eighteen essential AI CoE functions has provided practical clarity on operationalising AI governance. If you'd like to discuss implementing these functions in your organisation, I welcome the opportunity to exchange ideas.




About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.