
Tagged with: #governance

Posts tagged with #governance set out how to ensure AI decisions align with organisational values through governance structures that balance agility with appropriate controls.

The AI Maturity Mirage: Diagnosing the Gap Between Investment and Readiness

Llantwit Major | Published in AI and Board | 11 minute read
A glass-walled boardroom at dusk showing executives reviewing glowing data visualisations, with the window reflection revealing fragmented metrics and red indicators to illustrate the gap between perceived and actual AI maturity (Image generated by ChatGPT 5)

Boards frequently overestimate AI maturity by focusing on tool deployments rather than genuine capabilities, mistaking isolated pilot successes for systemic organisational readiness. This article exposes the three patterns that create the illusion: tool-centric thinking, pilot success traps, and hype-driven metrics. It then provides a diagnostic framework to reveal an organisation's true position and enable targeted advancement.


Minimum Lovable Governance: The AI Operating Principle Boards Should Use

London | Published in AI and Board | 13 minute read
A lightweight metal arbour frames an open pathway through a landscaped garden at dawn, representing governance as structure that guides and supports growth rather than constrains it (Image generated by ChatGPT 5)

Minimum lovable governance marks a shift from episodic compliance scrambles to continuous, embedded oversight that people actually want to use. In this article I explain how governance can achieve necessary guardrails whilst earning adoption rather than resistance — like an arbour that guides growth without constraining it. For Boards, minimum lovable governance presents a practical path: the operating principle that makes AI governance work when traditional approaches simply get routed around.


Agentic AI: Strip Away the Hype and Understand the Real Strategic Choice

Llantwit Major | Published in AI and Board | 17 minute read
Modern corporate boardroom scene split between thoughtful business executives on the left working with documents representing human-in-the-loop decision-making, and multiple glowing AI agent representations on the right operating autonomously in parallel, symbolising the strategic choice about where to transfer agency from humans to machines (Image generated by ChatGPT 5)

Agentic AI has become this year's poster child, dethroning generative AI as the technology everyone wants to discuss. Yet fundamental misunderstandings about what agentic systems actually do create barriers to successful adoption. This article cuts through the hype by revealing the core truth: agentic AI is generative AI in a loop, with the machine rather than a human driving the iteration. The strategic question is therefore not about technological sophistication but about where, and at what scale, to consciously transfer decision-making agency from people to systems.


Completing the AI Strategy Journey: From Policy to Practice Through Coherent Actions

Llantwit Major | Published in AI and Board | 14 minute read
A grand concert hall with a full orchestra mid-performance, perfectly synchronised under the conductor's dynamic leadership. Every section plays in harmony with subtle motion blur suggesting bow movements, while the audience sits in shadow, leaning forward in engagement. Golden stage lighting creates unity across the entire ensemble, representing coherent actions transforming strategy into systematic execution (Image generated by ChatGPT 5)

Deloitte’s 2025 survey shows 69% of boards discuss AI regularly yet only 33% feel equipped to oversee it, whilst MIT finds workers at over 90% of companies already use shadow AI without governance – exposing the execution gap between strategy and action. In this article, I provide sequenced, mutually reinforcing actions that transform the Complete AI Framework from guiding policy into systematic execution, building compound advantage from Day 1 amnesty through Quarter 4 scaling rather than accumulating another collection of disconnected initiatives.


AI's Interconnected Challenge: Diagnosing the Six Concerns of the Board

Sydney | Published in AI and Board | 12 minute read
A concert hall with a conductor at the podium studying six different musical scores spread before them, with six distinct beams of stage light illuminating different sections of empty orchestra seats, representing the Six Concerns that must be understood as an interconnected system rather than isolated elements (Image generated by ChatGPT 5)

The true AI governance challenge isn’t pilot failures – it’s that Boards’ six core concerns demand simultaneous orchestration yet receive sequential attention through project-level adoption. In this article, I show how these interconnected priorities form the proper diagnostic lens for AI governance, revealing why addressing them as a system rather than individually determines the difference between transformation and yet another failure.


After the AI Amnesty: Practical Steps to Operationalise Discovered Shadow AI

Llantwit Major | Published in AI and Board | 12 minute read
A corporate transformation scene showing AI tools transitioning from shadows into organised, illuminated workflows with visible governance frameworks and collaborative teams (Image generated by ChatGPT 5)

Following your AI amnesty programme, speed matters: employees who disclosed shadow AI usage expect enablement, not restriction, and the post-amnesty window is critical. In this article, I provide a roadmap for transforming discoveries into governed capabilities that boost organisational productivity and reduce the risk of AI slipping back into the shadows.


Shadow AI and the Case for an AI Amnesty

Llantwit Major | Published in AI and Board | 15 minute read
A corporate office environment showing contrasting scenes: shadowy figures using AI tools in darkness on one side, while the other shows transparent, well-lit collaborative AI usage, symbolising the transformation from shadow AI to governed innovation (Image generated by AI)

With a 68% surge in shadow AI usage and 54% of employees saying they would use AI tools even without company authorisation, Boards face a governance challenge traditional compliance cannot solve. This article presents AI amnesty as an important first step towards minimum lovable governance, transforming hidden risks into strategic assets whilst capturing employee-validated innovation. When 95% of enterprise AI pilots fail to deliver measurable ROI yet shadow AI thrives everywhere, the path forward isn’t enforcement but structured disclosure programmes that build trust and position early adopters as governance standard-setters.


AI Sovereignty: A Board's Guide to Navigating Conflicting National Agendas

London | Published in AI and Board | 15 minute read
Business executives in suits stand on a glass platform at a crossroads, overlooking three diverging roads leading to a classical European city in soft blue light, a futuristic American skyline with glowing data streams, and a Chinese metropolis with red-toned interconnected bridges, symbolising transparency, innovation, and integration. (Image generated by ChatGPT 5)

AI governance is fragmenting into incompatible systems — Europe prioritising trust through transparency, America pursuing speed through scale, China maintaining control through integration — forcing Boards to choose rather than compromise. In this article, I explore the sovereignty trilemma and present three strategic stances for navigating these landscapes without fracturing your strategy.


Why Boards Need to Watch the EU's General-Purpose AI Code of Practice

London | Published in AI and Board | 15 minute read
Abstract visualisation of regulatory divergence between EU and US AI approaches, showing two paths splitting from a central board decision point. (AI-generated)

The EU’s General-Purpose AI (GPAI) Code of Practice, effective August 2025, signals a new era of regulatory divergence. Whilst the EU sets transparency and systemic-risk guardrails, the US accelerates through deregulation. For Boards, the challenge isn’t choosing sides but mastering dual-track governance — turning regulatory complexity into strategic advantage.


How Agentic AI Turns Your Biggest Tech Problem into Competitive Advantage

Seattle | Published in AI, Board and Emerging | 11 minute read
A dramatic split-screen view of a giant clock mechanism being transformed by autonomous drones. The left side shows rusted, tangled gears and chains representing legacy technical debt, while the right side displays the same clock transformed into a gleaming holographic interface with digital displays and flowing data streams. Tiny maintenance drones work systematically between both sides, symbolising how agentic AI transforms outdated infrastructure into modern, future-ready architectures. (Image generated by ChatGPT 4o).

In the race to deploy agentic AI, organisations face a fundamental paradox: they’re building tomorrow’s autonomous systems on yesterday’s infrastructure. Drawing from the cloud transformation journey, this article explores how the same legacy architectures that constrain agentic AI also present an unprecedented opportunity. By retiring technical debt, organisations can clear the path for technological change that will define the next era of business competition. For Boards, the choice is clear: deploy agents within existing constraints, or use them to architect the foundation for future competitive advantage.