AI Centre of Excellence: Your First 90 Days With Well-Advised Value Focus

The first 90 days of your AI Centre of Excellence (AI CoE) determine whether it becomes a transformative force or another governance committee that slows innovation. Through five articles, we’ve established why boards need an AI CoE, explored the eighteen essential functions, mapped multi-speed reality with the AI CoE Simulator, designed adaptive governance structures, and built foundational capabilities across the Five Pillars. Now comes the critical transition: turning these capabilities into tangible business results.
The 90-Day Implementation Imperative
Your board has approved the AI CoE charter. The governance structure is designed. Capabilities are being built across the Five Pillars. Yet success hinges on what happens next: demonstrating value quickly enough to maintain momentum whilst building capabilities systematically enough to ensure sustainability.
The temptation is to spend these first 90 days on further planning, policy development, and infrastructure building. I’ve seen numerous AI CoEs fail by becoming so focused on perfecting governance that they never deliver tangible value. Others rush to launch pilots without systematic selection criteria, creating a scattered portfolio that neither builds capabilities nor delivers strategic impact.
The most successful AI CoEs I’ve seen take a different approach. They balance quick wins with strategic foundation-building, using each pilot as an opportunity to strengthen specific capabilities whilst delivering measurable value. They understand that the first 90 days must achieve three critical objectives simultaneously: demonstrate tangible value to maintain stakeholder support, build essential capabilities across the Five Pillars, and establish momentum that attracts participation rather than resistance.
Introducing the AI Initiative Rubric: Your Pilot Selection Tool
Just as the AI CoE Simulator reveals your multi-speed reality, effective pilot selection requires a systematic framework. One of the tools I’ve created and use regularly is the AI Initiative Rubric: a comprehensive pilot evaluation tool that tells you whether a pilot is a good candidate to deliver both value and capability. In this short video, I demonstrate how the rubric works:
The AI Initiative Rubric evaluates potential pilots across all five Well-Advised strategic priorities (strategic alignment), rather than being limited to traditional cost savings or efficiency gains. It also assesses how each pilot builds specific capabilities within the Five Pillars (capability building), giving an overall score and a recommendation: prioritise the initiative, consider it in the near term, add it to a future pipeline, or spend more time developing the idea.
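To make the scoring mechanics concrete, here is a minimal sketch of how such a rubric could combine the two assessments into one of the four recommendation bands. The actual AI Initiative Rubric is a visual, interactive tool; the 1-5 scale, the equal weighting, and the band thresholds below are my illustrative assumptions, not the published scoring model.

```python
from statistics import mean

def rubric_recommendation(value_scores, capability_scores):
    """Combine strategic-alignment scores (one per Well-Advised priority)
    and capability-building scores (one per pillar), each on a 1-5 scale,
    into one of the rubric's four recommendation bands.

    Thresholds and equal weighting are illustrative assumptions only.
    """
    overall = mean([mean(value_scores), mean(capability_scores)])
    if overall >= 4.0:
        return "prioritise"
    if overall >= 3.0:
        return "consider near term"
    if overall >= 2.0:
        return "future pipeline"
    return "develop the idea further"

# Example: a pilot scoring strongly on both dimensions
print(rubric_recommendation([5, 4, 4, 5, 4], [4, 4, 5, 4, 4]))
```

An initiative scoring highly on both dimensions lands in the "prioritise" band; a weak idea falls through to "develop the idea further", which is exactly the discipline the rubric enforces.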
Building Your 90-Day Sprint Portfolio
The AI Initiative Rubric helps you construct a portfolio that delivers across multiple time horizons and value dimensions. Your 90-day sprint should include three categories of initiatives, each serving different purposes in your AI journey.
Quick Wins (Days 1-30) focus on demonstrating immediate value whilst building foundational capabilities. These initiatives typically leverage existing data and infrastructure, require minimal new technology, and can show results within weeks. They’re not trivial - they’re strategically chosen to build confidence and capability simultaneously.
Shadow AI amnesty programmes exemplify effective quick wins. By inviting teams to register their unofficial AI experiments without penalty, you simultaneously improve governance visibility, identify innovative use cases, and transform potential risks into managed pilots.
AI literacy programmes for senior executives deliver another form of quick win. An innovation workshop doesn’t just educate - it transforms executives from AI sceptics to informed champions who can spot opportunities and govern effectively. The immediate value lies in improved decision-making quality, whilst the capability building ensures sustained governance excellence.
Foundation Builders (Days 31-60) establish critical infrastructure whilst delivering measurable value. These initiatives create reusable components, establish key processes, and build capabilities that accelerate future pilots.
Typical examples include establishing your first ML operations pipeline, creating data quality standards, and building a reusable recommendation framework.
Data quality improvement initiatives often serve as foundation builders. Whilst cleaning customer data for an AI pilot, organisations establish data governance processes, quality metrics, and stewardship roles that benefit all future initiatives. The immediate value comes from the pilot’s improved accuracy; the lasting value comes from institutional data discipline.
Strategic Initiatives (Days 61-90) launch transformative pilots that demonstrate AI’s potential to revolutionise business models whilst building advanced capabilities. These require more investment and time but offer proportionally greater returns. Pilots here will build capabilities across all Five Pillars.
The AI Initiative Rubric in Action: Systematic Pilot Evaluation
The AI Initiative Rubric brings analytical rigour to pilot selection, replacing opinion-based decisions with evidence-based evaluation. Like the AI CoE Simulator, it provides a visual, interactive framework that boards and executives can use to make informed decisions quickly.
It evaluates each potential pilot across multiple dimensions. Well-Advised value assessment examines how initiatives contribute to Innovation, Customer Value, Operational Excellence, Responsible Transformation, and Revenue/Margin/Profit. Rather than focusing solely on cost reduction, this multi-dimensional view reveals hidden value. A supply chain optimisation pilot might score moderately on operational excellence but highly on customer value through improved delivery reliability and responsible transformation through reduced environmental impact.
The capability building assessment maps how each pilot strengthens your Five Pillars maturity. A computer vision quality inspection system builds technical infrastructure through edge computing deployment, operational excellence through ML operations practices, and people capabilities through operator training. This multi-pillar strengthening accelerates overall AI maturity more effectively than initiatives touching single pillars.
The AI Initiative Rubric helps you ensure your 90-day sprint includes the appropriate mix of quick wins versus strategic initiatives, capability building across all Five Pillars, and value delivery across Well-Advised strategic priorities. This balance prevents the common pitfall of pursuing only easy wins that don’t build strategic capability or only complex initiatives that take too long to show results.
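The portfolio-balance check described above can be sketched in a few lines. This is a hypothetical helper, not part of the rubric tooling: the category names come from this article, whilst the pilot schema (a category plus the set of pillars each pilot touches) is an assumption for illustration.

```python
def portfolio_gaps(pilots, all_pillars):
    """Flag gaps in a 90-day sprint portfolio.

    pilots: list of dicts with 'category' (quick win / foundation builder /
    strategic initiative) and 'pillars' (set of pillar names touched).
    Returns the sprint categories missing from the mix and the pillars
    no pilot strengthens. Schema is illustrative only.
    """
    required = {"quick win", "foundation builder", "strategic initiative"}
    missing_categories = required - {p["category"] for p in pilots}
    touched = set().union(*(p["pillars"] for p in pilots)) if pilots else set()
    uncovered_pillars = set(all_pillars) - touched
    return sorted(missing_categories), sorted(uncovered_pillars)

# Example: a two-pilot portfolio checked against hypothetical pillar names
pilots = [
    {"category": "quick win", "pillars": {"people", "governance"}},
    {"category": "strategic initiative", "pillars": {"technology"}},
]
print(portfolio_gaps(pilots, ["people", "governance", "technology", "data", "operations"]))
```

Running the check on an unbalanced portfolio immediately surfaces the missing foundation builders and untouched pillars, which is the review discipline the rubric is designed to enforce before the sprint begins.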
Executing the 90-Day Sprint: From Selection to Success
With your portfolio selected through the AI Initiative Rubric, execution determines success. The most effective 90-day sprints follow a proven pattern that balances structure with agility.
Weeks 1-2: Launch and Learn. Begin with your shadow AI amnesty programme, simultaneously launching executive AI literacy sessions. These create immediate engagement whilst surfacing opportunities and risks.
Establish your AI CoE workspace as a visible hub of activity. Whether physical or virtual, it should showcase pilot progress, share learnings, and celebrate successes. Visibility drives engagement; engagement drives adoption.
Weeks 3-4: Quick Win Delivery. Your first quick wins should deliver visible results. An HR analytics pilot that predicts employee retention risk can show results within weeks. A customer service routing AI that improves first-call resolution demonstrates immediate customer value. These early successes build credibility essential for sustaining momentum.
Document observations obsessively during these early wins. What data challenges emerged? Which stakeholders needed more engagement? How did governance processes perform? These insights inform future pilots and strengthen capabilities systematically.
Weeks 5-8: Foundation Building. With quick wins establishing credibility, focus shifts to foundation builders. Launch your ML operations platform pilot, selecting a use case that delivers value whilst establishing reusable infrastructure. Implement data governance processes through a specific pilot that requires high-quality data, making governance tangible rather than theoretical.
Begin stakeholder expansion during this phase. Early success attracts interest from other departments. Use the AI Initiative Rubric to evaluate their proposals, maintaining portfolio discipline whilst accommodating growing demand.
Weeks 9-12: Strategic Launch. The final month launches strategic initiatives identified through AI Initiative Rubric evaluation. These complex pilots benefit from capabilities built during previous weeks - ML operations platforms accelerate development, established governance frameworks reduce risk, and AI-literate executives provide informed oversight.
A company launching predictive quality control in week nine could leverage data pipelines from its week-three pilot, governance frameworks from its week-five initiative, and the ML operations platform from its week-seven foundation builder - launching in four weeks what would have taken six months in isolation.
Managing the Innovation-Governance Balance
The perpetual tension in AI adoption lies between innovation speed and governance rigour. Your 90-day sprint must demonstrate that effective governance accelerates rather than impedes innovation. The key lies in making governance enabling rather than restricting.
Implement “governance as a service” rather than “governance as a gate”. Your AI CoE should provide templates, frameworks, and expertise that make governed AI easier than ungoverned AI. When teams can launch compliant pilots faster through the AI CoE than outside it, governance becomes an accelerator.
Create graduated governance (minimum lovable governance) that matches oversight to risk and maturity. Low-risk experiments need light-touch approval and monitoring. High-risk initiatives require comprehensive oversight. The AI Initiative Rubric’s assessment informs appropriate governance levels, preventing both under-governance of critical initiatives and over-governance of experiments.
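The graduated-governance idea above maps naturally to a simple decision rule. The sketch below is illustrative only: the three oversight tiers paraphrase this article's light-touch versus comprehensive distinction, and the 1-5 risk and maturity scales with their thresholds are my assumptions, not a published standard.

```python
def governance_level(risk_score, maturity_score):
    """Map a pilot's risk (1-5, higher = riskier) and team maturity
    (1-5, higher = more mature) to an oversight tier.

    Tier names and thresholds are illustrative assumptions.
    """
    if risk_score >= 4:
        return "comprehensive oversight"
    if risk_score >= 3 or maturity_score <= 2:
        return "standard review"
    return "light-touch approval and monitoring"

# Example: a low-risk experiment run by an experienced team
print(governance_level(2, 4))
```

The point of encoding the rule, even informally, is consistency: every pilot gets oversight proportional to its risk, so low-risk experiments are never gridlocked and high-risk initiatives are never waved through.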
Celebrate intelligent failure alongside success. Not every pilot will succeed - that’s the nature of innovation. What matters is learning from failures quickly and systematically. A failed recommendation engine that revealed critical data quality issues provides valuable learning. A terminated chatbot pilot that exposed change management challenges strengthens future initiatives.
Stakeholder Engagement: Building a Movement, Not a Mandate
The most successful AI CoEs create movements rather than mandates. Your 90-day sprint should transform stakeholders from compliance-driven participants to enthusiasm-driven champions. This transformation requires deliberate engagement strategies tailored to different stakeholder groups.
For board members, focus on strategic value and risk mitigation. Use the AI Initiative Rubric’s Well-Advised assessment to show how pilots deliver across all value dimensions. Demonstrate how governance frameworks reduce risk whilst enabling innovation. Provide regular updates that combine quantitative metrics with qualitative stories - the predictive maintenance pilot that prevented a critical failure resonates more than percentage improvements.
Executive engagement requires connecting AI to their specific challenges. The AI Initiative Rubric helps identify pilots that address executive pain points whilst building broader capabilities. When the CFO sees how invoice processing AI improves working capital whilst establishing foundations for broader finance transformation, support follows naturally.
Middle management often represents the greatest resistance and the greatest opportunity. They fear AI will diminish their roles or expose their operations to unwanted scrutiny. Counter this by selecting pilots that augment their capabilities rather than replace them. An AI that helps managers predict project risks and suggest mitigations makes them more effective, transforming potential opponents into champions.
Frontline employees need to see AI as an assistant, not a replacement. Launch pilots that eliminate tedious tasks whilst enabling more meaningful work. Customer service representatives freed from routine queries to handle complex, high-value interactions become AI advocates. Manufacturing operators whose AI assistants predict equipment failures before they occur champion widespread adoption.
Creating Sustainable Momentum
Momentum built during the first 90 days must sustain beyond initial enthusiasm. This requires systematic approaches to learning capture, capability transfer, and success amplification.
Establish “AI Champions” within each business unit - not technical experts but business professionals who understand AI’s potential within their domain. These champions, trained during your literacy programmes, identify new opportunities, support pilot implementation, and share lessons learned across units. They become your distributed AI CoE, scaling impact beyond central team capacity.
Create reusable assets from every pilot. A recommendation engine pilot produces not just an algorithm but a documented framework others can adapt. A data quality initiative delivers not just clean data but established processes others can follow. This asset creation multiplies the value of every initiative.
Measuring Success: Multi-Dimensional Metrics
Traditional ROI calculations capture only a fraction of AI CoE value during the first 90 days. Success measurement must reflect the multi-dimensional nature of your objectives, balancing immediate returns with capability building and strategic positioning.
When measuring Well-Advised value delivery, you’re looking for evidence across all five strategic priorities. Innovation reveals itself through new capabilities that didn’t exist before and dramatic improvements in time-to-market for new offerings. Customer value emerges not just in satisfaction scores but in fundamental experience transformations - the difference between answering queries faster and anticipating needs before they arise. Operational excellence extends beyond simple efficiency gains to encompass quality improvements that change competitive dynamics.
The responsible transformation dimension often proves most challenging to measure yet most critical to sustain. It’s found in the trust you build with stakeholders who see AI enhancing rather than threatening their roles, and in risk metrics that show governance preventing problems rather than reacting to them. Revenue, margin and profit improvements manifest not only through cost reduction but through entirely new revenue streams enabled by AI capabilities.
Capability maturity across the Five Pillars tells another crucial story. Watch how governance evolves from reactive scrambling to proactive enablement, how technical infrastructure transforms from experimental patches to production-grade platforms. These improvements represent invested value; foundations that accelerate every future initiative. The progression from ad-hoc to systematic approaches multiplies your AI CoE’s impact exponentially.
Perhaps most telling are the adoption velocity indicators. When departments shift from reluctant compliance to active pursuit of AI CoE partnership, when pilot launch times compress from months to weeks, when the percentage of pilots reaching production climbs steadily - these signals reveal whether you’re building genuine momentum or merely checking boxes.
The AI Initiative Rubric synthesises these diverse metrics into coherent dashboards that tell your complete 90-day story. Board members see beyond individual pilot returns to understand how systematic capability building and accelerating adoption create compound value. This comprehensive view transforms budget discussions from cost justification to investment acceleration.
Common Pitfalls and How to Avoid Them
In supporting organisations through their AI journeys, I’ve seen similar challenges emerge repeatedly. Understanding these patterns helps new AI CoEs avoid unnecessary setbacks.
The Perfection Trap paralyses AI CoEs that wait for perfect data, complete governance frameworks, or ideal infrastructure before launching pilots. Your 90-day sprint must embrace “good enough to start” and “minimum lovable governance” whilst building toward excellence. Use the AI Initiative Rubric to identify pilots that can succeed with current capabilities whilst advancing maturity.
The Technology Seduction leads AI CoEs to select pilots based on technical sophistication rather than business value. The AI Initiative Rubric’s Well-Advised assessment prevents this by ensuring every pilot delivers multi-dimensional business value regardless of whether it uses simple regression or complex deep learning.
The Stakeholder Assumption occurs when AI CoEs assume stakeholder support without earning it. Early pilots must address real stakeholder pain points, not theoretical opportunities.
The Governance Gridlock emerges when risk aversion creates approval processes that kill innovation speed. Your 90-day sprint must demonstrate that thoughtful governance accelerates safe innovation rather than preventing it. Light-touch governance for low-risk pilots paired with robust oversight for critical initiatives shows this balance in action.
The Scale Fixation pushes organisations to pursue enterprise-wide implementations before proving concepts. Your 90-day sprint should focus on bounded, meaningful pilots that can scale after proving value.
From 90 Days to Sustained Transformation
As your first 90 days conclude, the transition to sustained operation requires careful orchestration. Success creates its own challenges - demand exceeds capacity, stakeholders expect continuous innovation, and early pilots require scaling decisions.
The foundation you’ve built through systematic pilot selection, balanced portfolio execution, and multi-dimensional value delivery positions you for this transition. Teams trained through early pilots become trainers for the next wave. Infrastructure built for foundation builders supports strategic initiatives. Governance frameworks proven through quick wins scale to enterprise deployments.
Most importantly, the cultural shift initiated during these 90 days - from AI fear to AI enthusiasm, from ungoverned shadow AI to coordinated innovation - creates the conditions for sustained transformation. When business leaders actively seek AI CoE partnership because it accelerates their success, you’ve achieved the ultimate quick win: transforming governance from a barrier into an enabler.
Your Next Board Meeting: From Theory to Results
Three months from now, your board will convene to assess AI CoE progress. Instead of presenting theoretical frameworks and planned initiatives, you’ll demonstrate tangible results. The AI Initiative Rubric dashboard will show a portfolio of pilots delivering value across all Well-Advised dimensions. Capability assessments will indicate measurable progress across all Five Pillars. Adoption metrics will reveal accelerating momentum.
But beyond metrics, you’ll share transformation stories. The shadow AI experiment that became a strategic initiative. The sceptical executive who became an AI champion. The routine process that became a competitive advantage. These stories, backed by systematic measurement and enabled by thoughtful governance, transform AI from an ungoverned risk into a governed opportunity.
The journey from AI CoE approval to demonstrated value in 90 days requires more than good intentions. It demands systematic pilot selection through tools like the AI Initiative Rubric, balanced portfolio execution across quick wins and strategic initiatives, and relentless focus on building capabilities whilst delivering value.
Next week, we’ll explore how to scale beyond pilots, leveraging the foundations built during your first 90 days to transform successful experiments into enterprise capabilities. The momentum you’ve created becomes the engine for sustained transformation, but only if you navigate the unique challenges of scaling AI initiatives.
Your first 90 days determine whether your AI CoE becomes a transformative force or another well-intentioned initiative that fails to deliver. By following this roadmap, using systematic tools like the AI Initiative Rubric, and maintaining unwavering focus on multi-dimensional value delivery, you ensure your AI Centre of Excellence lives up to its name - becoming truly excellent at turning AI potential into business reality.
Let's Continue the Conversation
Thank you for following my AI Centre of Excellence series. If your organisation is ready to launch its AI CoE with systematic pilot selection and multi-dimensional value delivery, I'd welcome the opportunity to discuss how the AI Initiative Rubric and proven 90-day sprint approaches can accelerate your journey.
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.