
Completing the AI Strategy Journey: From Policy to Practice Through Coherent Actions

Llantwit Major | Published in AI and Board | 14 minute read
[Image: A concert hall orchestra mid-performance, perfectly synchronised under the conductor's leadership, representing coherent actions transforming strategy into systematic execution (image generated by ChatGPT 5)]

Deloitte’s 2025 Global Board Survey finds that whilst 69% of boards now discuss AI regularly, only 33% feel equipped to oversee AI strategy effectively. Meanwhile, MIT’s State of AI in Business Report reveals workers at over 90% of organisations use personal AI tools, creating a shadow AI economy that operates beyond governance reach whilst often delivering hidden value that formal programmes miss. This execution gap exposes strategy’s ultimate test: transforming insight and policy into action that compounds rather than fragments.

Three weeks ago, I showed how business cases aren’t strategy. Two weeks ago, the Six Concerns revealed why project-level governance fails. Last week, the Complete AI Framework provided guiding policy for systematic transformation. This final article completes the journey with coherent actions that transform policy into practice.

Richard Rumelt teaches that strategy requires coherent actions – coordinated steps that reinforce each other. For AI transformation, this means carefully sequencing initiatives so each builds capabilities the next requires whilst addressing multiple concerns simultaneously.

The actions that follow transform the Complete AI Framework from policy into practice. They concentrate resources on leverage opportunities our diagnosis revealed, prevent cascade failures through addressing simultaneous concerns, and turn multi-speed adoption from apparent weakness into strategic strength.

Day 1: The AI amnesty catalyst

The most powerful first action counterintuitively begins with amnesty, not prohibition. MIT’s research reveals workers at over 90% of companies use personal AI tools, mostly without governance or support, whilst Gartner predicts 30% of generative AI projects will be abandoned after proof-of-concept by the end of 2025. This shadow AI paradox – ungoverned tools thriving whilst formal initiatives fail – represents both risk and opportunity. The AI amnesty transforms this hidden activity into visible advantage whilst addressing three of the Six Concerns immediately.

Launch the amnesty immediately, before establishing frameworks that might drive innovation further underground. Announce a 30-day window where employees can disclose their AI usage without consequences, register the tools they’ve found valuable, and share what problems they’re solving. Position this as organisational learning, not surveillance. The message from the Board must be clear: we want to enable and govern your innovation, not punish your initiative. My detailed guide to AI amnesty programmes provides the implementation blueprint, but the strategic insight matters most – amnesty creates the visibility needed for all subsequent actions whilst building Stakeholder Confidence through trust rather than enforcement.
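As a purely illustrative sketch (not part of the amnesty blueprint itself), the programme's core record-keeping could be modelled as a simple disclosure register that only accepts entries while the 30-day window is open. All names here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

AMNESTY_DAYS = 30  # the 30-day disclosure window described above


@dataclass
class Disclosure:
    """One employee's voluntary record of a shadow-AI tool in use."""
    tool: str
    problem_solved: str
    disclosed_on: date


@dataclass
class AmnestyRegister:
    """Collects disclosures made during the amnesty window."""
    opened_on: date
    disclosures: list[Disclosure] = field(default_factory=list)

    def window_open(self, today: date) -> bool:
        """True while the definitive close date has not passed."""
        return self.opened_on <= today <= self.opened_on + timedelta(days=AMNESTY_DAYS)

    def record(self, d: Disclosure) -> bool:
        """Accept a disclosure only while the window is open."""
        if not self.window_open(d.disclosed_on):
            return False
        self.disclosures.append(d)
        return True
```

The definitive close date is enforced in code rather than by policy alone, mirroring the urgency the article argues the amnesty deadline should create.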

Amnesty immediately addresses three of the Six Concerns. It makes Risk Management proactive rather than reactive, channels Innovation rather than suppressing it, and builds Stakeholder Confidence through trust rather than enforcement. These discoveries then cascade through every subsequent action – informing AI Centre of Excellence (AI CoE) priorities, shaping portfolio decisions, generating metrics insights, and guiding scaling choices. Each action amplifies the next.

The amnesty’s timing and structure create leverage for subsequent actions. Conducting it organisation-wide prevents fragmentation that would undermine portfolio orchestration. Setting a definitive close date creates urgency that accelerates AI CoE formation. Documenting discoveries provides the data foundation for comprehensive measurement. Each element reinforces the next, creating momentum from day one.

Weeks 2-4: Establish the AI CoE as force multiplier

With shadow AI surfacing through amnesty, organisations need mechanisms to transform discoveries into governed capabilities. The AI CoE emerges not as another governance layer but as the operational engine that concentrates resources where they generate maximum advantage. As I detailed in my earlier series on AI CoEs, this isn’t about creating another IT function but establishing business-led capability building. IBM’s research shows organisations with AI CoEs report better strategic outcomes through centralised coordination, reducing deployment bottlenecks whilst optimising procurement and governance across the enterprise.

Structure the AI CoE to explicitly address the interconnected nature of the Six Concerns revealed in our diagnosis. For Strategic Alignment, grant authority to redirect resources from low-impact pilots to high-leverage initiatives. For Ethical and Legal Responsibility, establish dynamic frameworks that evolve with technology rather than freezing in static compliance. For Financial and Operational Impact, implement portfolio approaches that capture compound value rather than project-level returns. For Risk Management, develop predictive capabilities that anticipate emergent threats. For Stakeholder Confidence, create transparent communication channels across diverse groups. For Safeguarding Innovation, protect experimentation spaces whilst maintaining governance guardrails.

The AI CoE’s charter must embrace three realities that create coherence across actions, implementing what I call minimum lovable governance – just enough structure to demonstrate good faith whilst preserving organisational agility. First, multi-speed adoption isn’t failure but feature – the AI CoE orchestrates different velocities for mutual reinforcement rather than forcing synchronisation. Second, the Six Concerns require simultaneous attention – sequential addressing creates the cascade failures diagnosed earlier. Third, innovation emerges from unexpected sources – the AI CoE channels shadow AI discoveries into competitive advantage rather than suppressing them.

This action builds directly on amnesty discoveries whilst enabling portfolio orchestration. The champions identified through amnesty become the AI CoE’s ambassadors. The tools discovered become the technology foundation. The use cases revealed guide priority setting. Meanwhile, the AI CoE’s establishment creates the governance machinery needed for systematic portfolio management, comprehensive metrics, and scaling decisions. Each action amplifies the others, concentrating force rather than dispersing effort.

Quarter 1: Launch portfolio orchestration that creates compound value

With amnesty revelations processed and the AI CoE operational, Quarter 1 focuses on portfolio orchestration. This isn’t traditional portfolio management that simply tracks multiple projects. It’s strategic concentration of resources where they generate maximum leverage – turning multi-speed adoption from apparent weakness into competitive strength.

Begin by mapping initiatives against the Complete AI Framework. Where does each sit within the Five Pillars? What maturity stage has each function reached? Which value dimensions does each capture? Most critically, which combinations create compound effects rather than conflicts?

The amnesty revealed where value actually emerges – prioritise those areas. The CoE identified capability gaps – fill them systematically. Then design the portfolio through careful sequencing, so each initiative builds the capabilities the next requires.

McKinsey’s 2025 State of AI research shows organisations that redesign workflows to integrate AI see greater EBIT impact. Portfolio orchestration ensures this happens systematically. Marketing’s rapid experimentation discovers governance approaches finance can adapt. Operations’ careful automation develops testing protocols HR can leverage. Each function’s natural velocity creates value for others, building the coherence Rumelt describes.
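The mapping exercise above lends itself to a simple data model. The sketch below is illustrative only: the initiative names and pillar labels are invented, and overlap between the Six Concerns addressed is used as a crude proxy for combinations likely to create compound effects rather than conflicts:

```python
from dataclasses import dataclass


@dataclass
class Initiative:
    """One AI initiative mapped against the Complete AI Framework."""
    name: str
    pillar: str                    # which of the Five Pillars it sits within
    stage: str                     # maturity stage, e.g. "Adopting", "Transforming"
    concerns_addressed: set[str]   # subset of the Six Concerns


def compound_candidates(portfolio: list[Initiative]) -> list[tuple[str, str]]:
    """Pairs of initiatives whose addressed concerns overlap --
    a rough first pass at spotting mutually reinforcing combinations."""
    pairs = []
    for i, a in enumerate(portfolio):
        for b in portfolio[i + 1:]:
            if a.concerns_addressed & b.concerns_addressed:
                pairs.append((a.name, b.name))
    return pairs
```

In practice the assessment is a judgement call for the AI CoE, not a set intersection; the point of the sketch is only that the mapping questions in the paragraph above translate naturally into a structured portfolio record.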

Quarter 2: Implement three-dimensional metrics aligned with the Six Concerns

Traditional metrics can’t capture AI’s compound value creation or detect cascade failures before they manifest. Quarter 2 establishes three-dimensional measurement explicitly aligned with each of the Six Concerns, ensuring governance tracks what matters rather than what’s easily measured. This creates the feedback loops that enable continuous adjustment – essential for coherent actions that must adapt whilst maintaining strategic direction.

| Metric Type | Definition | Six Concerns Alignment | Example Metrics | Data Sources |
| --- | --- | --- | --- | --- |
| Leading Indicators | Predictive of future value; track proactive inputs and early signals | Safeguarding Innovation, Strategic Alignment | Prototype velocity (ideas to pilot); Adoption rates (% using AI for >30% of tasks); Data quality scores; Skills development progress; Revenue potential via innovation tracking (Well-Advised) | Innovation tracking systems; Usage analytics; Data governance tools; Learning platforms |
| Lagging Indicators | Measure past results; validate impact post-implementation | Financial Impact, Risk Management outcomes | Revenue uplift from AI initiatives; Cost reduction achieved; Incident rates and severity; Compliance audit results; Operational improvement metrics (Well-Advised) | Financial systems; Risk registers; Audit reports; P&L statements |
| Predictive Indicators | Forward-looking AI-powered forecasts and simulations | Stakeholder Confidence, Ethical Responsibility, Financial Impact, Risk Management | Customer churn risk models; Bias detection predictions; Regulatory compliance forecasts; Talent retention projections; Customer value projections (Well-Advised); Future revenue impact models; Emerging threat detection | AI analytics platforms; Ethics monitoring tools; Regulatory tracking; HR systems; Financial forecasting tools; Risk prediction systems |

Deploy these metrics to create coherence across the portfolio. Leading indicators from innovation experiments inform risk management protocols. Lagging measures from successful implementations guide scaling decisions. Predictive models anticipate where ethical challenges might emerge, enabling proactive intervention. The three-dimensional approach ensures no single concern dominates at others’ expense – preventing the cascade failures our diagnosis revealed. Gartner’s 2025 Hype Cycle research, predicting that 30% of generative AI projects will be abandoned after proof of concept, underscores the importance of a comprehensive measurement framework.
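To make the classification in the table concrete, here is a minimal, assumption-laden sketch of a metrics registry: each metric carries its type and the concerns it informs, and a coverage check flags any of the Six Concerns no metric currently tracks. The metric names and helper are illustrative, not part of the framework:

```python
from dataclasses import dataclass
from enum import Enum


class MetricType(Enum):
    LEADING = "leading"        # predictive of future value
    LAGGING = "lagging"        # validates impact post-implementation
    PREDICTIVE = "predictive"  # forward-looking AI-powered forecast


@dataclass
class Metric:
    """One measurement, tagged with the concerns it informs."""
    name: str
    metric_type: MetricType
    concerns: set[str]  # which of the Six Concerns it tracks


def coverage_gaps(metrics: list[Metric], concerns: set[str]) -> set[str]:
    """Concerns not tracked by any metric -- an early warning that
    one concern may come to dominate at the others' expense."""
    covered = set().union(*(m.concerns for m in metrics)) if metrics else set()
    return concerns - covered
```

Run periodically by the AI CoE, a check like this would give governance a simple feedback loop: any non-empty gap set signals that the measurement system is tracking what is easy rather than what matters.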

The World Economic Forum’s 2025 AI Adoption report shows 60-70% adoption in data-rich industries versus less than 20% elsewhere, highlighting the importance of measuring readiness as well as results. These metrics must capture both current performance and future potential, ensuring Boards govern transformation rather than just tracking activity. The insights generated here directly inform Quarter 3’s scaling decisions, ensuring successes are amplified whilst failures are contained early.

Quarters 3-4: Scale successes whilst learning from failures

By Quarter 3, patterns emerge from orchestrated experimentation. Some initiatives deliver compound value across multiple concerns. Others struggle despite sound business cases. The coherent action now involves systematic scaling of successes whilst gracefully sunsetting failures – both essential for transformation that builds competitive advantage rather than accumulating technical debt.

Scaling requires extracting principles that made initiatives successful and applying them where they create maximum leverage. If marketing’s content generation succeeded through tight human-AI collaboration loops, that principle might transform customer service automation. If finance’s breakthrough came from synthetic data generation, that capability could accelerate product development. This isn’t just expanding what works – it’s concentrating successful patterns where they generate compound advantage.

NACD’s 2025 Board Outlook reveals 62% of boards now allocate specific AI agenda time, with those maintaining consistent oversight reporting better innovation protection. Systematic scaling creates this protection by building on proven foundations rather than repeating experiments. Document why initiatives failed – technical immaturity, organisational resistance, or governance gaps – and share these lessons transparently. Redirect resources from failed experiments to proven successes, concentrating force rather than spreading it thin.

Design scaling actions across three dimensions that reinforce each other. Vertical scaling within functions creates depth – moving from Adopting to Transforming stages. Horizontal scaling across functions creates breadth – spreading successful approaches enterprise-wide. Ecosystem scaling with partners creates network effects – extending capabilities through value chains whilst building Stakeholder Confidence across the wider business network. WEF’s 2025 research emphasises ecosystem approaches enable organisations to achieve superior innovation outcomes, with adaptable ecosystems yielding faster innovation cycles and broader business value.

These scaling decisions create coherence by ensuring each success amplifies others. Vertical depth in one function provides the expertise for horizontal expansion. Horizontal breadth generates the data for ecosystem partnerships. Ecosystem connections accelerate vertical development through external learning whilst strengthening multiple concerns simultaneously. This kernel ensures actions cohere across concerns – ecosystems bolster Ethical Responsibility via shared standards, enhance Financial Impact through partnerships, build Stakeholder Confidence through transparency, and safeguard Innovation through collective learning, completing the holistic loop. The playbook transforms isolated successes into systematic capability that competitors struggle to replicate.

The compound advantage formula

These coherent actions don’t just implement the Complete AI Framework – they concentrate force on the leverage points our diagnosis revealed. Instead of addressing the Six Concerns sequentially and creating cascade failures, they tackle them simultaneously to prevent vulnerabilities. Rather than forcing all functions to synchronise, they orchestrate multi-speed adoption to transform apparent fragmentation into strategic advantage. Where traditional metrics capture only isolated returns, this approach measures multi-dimensional value to reveal compound benefits.

The power lies in how each action creates conditions for the next to succeed. Amnesty surfaces hidden innovation whilst building the trust necessary for transformation. These discoveries feed directly into the AI CoE, which transforms them into governed capabilities that shape portfolio decisions. The portfolio orchestration ensures initiatives reinforce rather than undermine each other, whilst comprehensive metrics detect problems before they cascade into failures. When scaling begins, it multiplies proven successes whilst systematic learning from failures accelerates future improvement – creating the self-reinforcing momentum that Rumelt identifies as strategy’s hallmark.

This approach succeeds precisely because it accepts realities that traditional governance resists. When functions naturally adopt AI at different paces, we orchestrate the variation for mutual benefit rather than forcing artificial synchronisation. When initiatives fail – as some inevitably will – we treat them as learning accelerators rather than shameful secrets. When value emerges in unexpected dimensions, our measurement systems capture what matters rather than what’s conventional. For organisations willing to commit, these apparent tensions become sources of competitive advantage.

The boardroom commitment

Executing this playbook demands boardroom commitments that fundamentally reimagine governance itself. PwC’s 2025 AI Agent Survey shows 88% of companies plan to increase AI budgets, yet McKinsey finds only 13% of employees use generative AI for more than 30% of their tasks. This chasm between investment and implementation won’t close through traditional oversight – it requires patient capital that prioritises learning over immediate returns, systematic capability building over quick wins, and protected innovation spaces within appropriate governance boundaries.

The transformation demands evolving governance mechanisms to match AI’s velocity. When quarterly board meetings can’t keep pace with AI’s evolution, continuous oversight through dedicated AI committees becomes essential. When traditional risk frameworks fail to capture emergent patterns, organisations need dynamic assessment approaches that learn and adapt. When project-by-project approval processes constrain transformation, portfolio orchestration must replace them. With Stanford’s 2025 AI Index showing legislative AI mentions increased 21%, regulatory acceleration demands equally agile governance responses.

Perhaps most critically, Boards must embody the transformation they seek to govern. This means using AI tools in board meetings to demonstrate commitment, sharing lessons from failures to encourage organisational learning, and celebrating systematic progress rather than isolated wins. When the boardroom embraces rather than merely oversees AI transformation, it creates the organisational confidence essential for genuine change.

From strategy to systematic advantage

Four weeks ago, we began by exposing how accumulating business cases creates fragmentation rather than transformation. Through diagnosing the Six Concerns as an interconnected system, establishing the Complete AI Framework as guiding policy, and now implementing coherent actions that concentrate force on leverage points, we’ve traced the complete strategic journey from confusion to capability.

The Coherent Actions Playbook transforms this journey from concept to commitment. Each action builds on previous foundations whilst enabling subsequent steps. Each addresses multiple concerns simultaneously rather than creating sequential vulnerabilities. Each concentrates resources where they generate maximum advantage rather than spreading effort across disconnected initiatives. This is the coherence that transforms strategy from intention into competitive advantage.

McKinsey research shows companies that achieve enterprise-wide technology transformations can deliver 3x EBITDA lift compared to those pursuing isolated initiatives. Deloitte’s survey reveals the 33% of boards that feel equipped for AI governance report improved innovation outcomes and value creation. Stanford’s AI Index documents AI investment has increased ninefold since 2016, yet most organisations still lack the systematic approach to capture this value.

Boards face a strategic choice with existential implications. Continue approaching AI through disconnected projects, hoping incremental improvements somehow cohere whilst competitors build systematic capability. Or commit to coherent actions that transform the Complete AI Framework from policy into practice, from intention into advantage, from strategy into market leadership.

The playbook provides the choreography. The Complete AI Framework supplies the structure. The Six Concerns diagnosis reveals the challenge. But transformation requires one final element: boardroom courage to commit to systematic change rather than comfortable incrementalism. Those Boards that find this courage won’t just govern AI effectively – they’ll orchestrate the transformation that defines their organisations’ next decade.

Strategy without action remains intention. Action without coherence becomes chaos. But coherent actions that reinforce each other, concentrating force on leverage points to build systematic advantage? That’s how Boards transform AI from expensive experimentation into compound value creation that competitors can’t replicate. The playbook is complete. Boards must now decide: commit to this systematic transformation now, or cede the stage to competitors who will.

Let's Continue the Conversation

Thank you for reading about the AI Coherent Actions Playbook for transforming strategy into systematic advantage. I'd welcome hearing about your Board's journey from intention to implementation – whether you're launching AI amnesty programmes to surface shadow innovation, establishing AI Centres of Excellence that concentrate resources for compound value, implementing three-dimensional metrics that capture what matters, or discovering which scaling patterns create competitive advantage rather than accumulating technical debt.




About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two-billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.