Tagged with: #governance

Posts tagged with #governance set out how to ensure AI decisions align with organisational values through governance structures that balance agility with appropriate controls.

Return-to-Work Briefing: Five Forces Reshaping the Board AI Agenda in 2026

New York | Published in AI and Board | 10 minute read
Empty leather executive chair at the head of a polished boardroom table, five luminous streaks of light converging across the table surface toward an open briefing document and pen at the centre, stack of reports to one side. Dawn light breaks through clouds over a city skyline visible through floor-to-ceiling windows, casting warm golden and cool blue reflections across the scene. (Image generated by ChatGPT 5.2)

As we return to our desks for 2026, the AI forces demanding attention aren’t distant possibilities but strategic choices already in motion. AI is embedding itself into enterprise applications faster than organisations can govern it, whilst simultaneously eroding the human capabilities needed to oversee it. In this article I examine five of these forces — AI’s shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.


The Year AI Grew Up: Five Inflections That Changed the Strategic Calculus in 2025

Washington DC | Published in AI and Board | 14 minute read
A sleek white humanoid robot sits among business executives in suits around a polished boardroom table, with documents and laptops before them and a city skyline bathed in golden sunrise light visible through floor-to-ceiling windows, symbolising AI's transition from experimental technology to strategic infrastructure with a seat at the Board table. (Image generated by ChatGPT 5.2)

In 2025, Boardrooms saw a collective shift in how they thought about AI's role. What they spent 2023 and 2024 reacting to became a question of strategic investment in organisational infrastructure. They moved from "what can it do?" and "should we use it?" to "how do we navigate competing pressures and make this core to how we operate?" In this article, I examine the five interconnected inflections that drove this shift, and what they mean for Boards entering 2026.


The Return of Traditional AI: Organisations Are Rethinking Their LLM-First Strategies

Llantwit Major | Published in AI and Board | 8 minute read
Precision analogue gauges and industrial instruments including pressure dials, voltmeter, RPM gauge and digital timer arranged on a steel workstation, with neural network visualisations and probability distribution curves floating between monitoring displays in a control room background (Image generated by ChatGPT)

Forty-two percent of companies abandoned the majority of their AI initiatives this year — not because AI failed, but because organisations applied generative AI to problems better solved by traditional machine learning or deterministic automation. This article examines the recalibration underway as sophisticated adopters discover that LLMs excel at specific tasks but prove expensive and unreliable when mismatched to problem domains. For Boards, this shift presents an opportunity to right-size investments through hybrid architectures that match capabilities to problems, capturing value through strategic deployment rather than universal LLM adoption.


A New Grid Actor: AI Infrastructure Is Becoming Energy Infrastructure

London | Published in AI and Board | 9 minute read
Modern hyperscale data centre facility at golden hour with small modular reactor cooling towers and wind turbines visible in the background, transmission lines connecting the facilities bidirectionally, set in British countryside with rolling green hills (Image generated by ChatGPT 5)

America’s 19GW power shortfall by 2028 is forcing hyperscalers to build their own generation, but the strategic insight is what happens next: surplus capacity transforms AI infrastructure operators from energy consumers into grid actors. This article examines how distributed generation reshapes the relationship between technology companies and national grids, exploring whether the UK’s smaller system enables transformation or creates concentration risk. For Boards, this evolution demands governance frameworks that address not just AI deployment but grid participation — before the transition forces answers upon them.


The AI Maturity Mirage: Diagnosing the Gap Between Investment and Readiness

Llantwit Major | Published in AI and Board | 11 minute read
A glass-walled boardroom at dusk showing executives reviewing glowing data visualisations, with the window reflection revealing fragmented metrics and red indicators to illustrate the gap between perceived and actual AI maturity (Image generated by ChatGPT 5)

Boards frequently overestimate AI maturity by focusing on tool deployments rather than genuine capabilities, mistaking isolated pilot successes for systemic organisational readiness. This article exposes the three patterns that create the illusion — tool-centric thinking, pilot success traps, and hype-driven metrics — and provides a diagnostic framework to reveal true position and enable targeted advancement.


Minimum Lovable Governance: The AI Operating Principle Boards Should Use

London | Published in AI and Board | 13 minute read
A lightweight metal arbour frames an open pathway through a landscaped garden at dawn, representing governance as structure that guides and supports growth rather than constrains it (Image generated by ChatGPT 5)

Minimum lovable governance marks a shift from episodic compliance scrambles to continuous, embedded oversight that people actually want to use. In this article I explain how governance can achieve necessary guardrails whilst earning adoption rather than resistance — like an arbour that guides growth without constraining it. For Boards, minimum lovable governance presents a practical path: the operating principle that makes AI governance work when traditional approaches simply get routed around.


Agentic AI: Strip Away the Hype and Understand the Real Strategic Choice

Llantwit Major | Published in AI and Board | 17 minute read
Modern corporate boardroom scene split between thoughtful business executives on the left working with documents representing human-in-the-loop decision-making, and multiple glowing AI agent representations on the right operating autonomously in parallel, symbolising the strategic choice about where to transfer agency from humans to machines (Image generated by ChatGPT 5)

Agentic AI has become this year's poster child, dethroning generative AI as the technology everyone wants to discuss. Yet fundamental misunderstandings about what agentic systems actually do create barriers to successful adoption. This article demystifies the hype by revealing the core truth: agentic AI is generative AI in a loop, with the machine driving iteration instead of a human. The strategic question is therefore not one of technology sophistication, but of where, and at what scale, to consciously transfer decision-making agency from people to systems.


Completing the AI Strategy Journey: From Policy to Practice Through Coherent Actions

Llantwit Major | Published in AI and Board | 14 minute read
A grand concert hall with a full orchestra mid-performance, perfectly synchronised under the conductor's dynamic leadership. Every section plays in harmony with subtle motion blur suggesting bow movements, while the audience sits in shadow, leaning forward in engagement. Golden stage lighting creates unity across the entire ensemble, representing coherent actions transforming strategy into systematic execution (Image generated by ChatGPT 5)

Deloitte's 2025 survey shows 69% of Boards discuss AI regularly yet only 33% feel equipped to oversee it, whilst MIT finds workers at over 90% of companies already use shadow AI without governance, exposing the execution gap between strategy and action. In this article, I provide sequenced, mutually reinforcing actions that transform the Complete AI Framework from guiding policy into systematic execution, building compound advantage from Day 1 amnesty through Quarter 4 scaling rather than accumulating another collection of disconnected initiatives.


AI's Interconnected Challenge: Diagnosing the Six Concerns of the Board

Sydney | Published in AI and Board | 12 minute read
A concert hall with a conductor at the podium studying six different musical scores spread before them, with six distinct beams of stage light illuminating different sections of empty orchestra seats, representing the Six Concerns that must be understood as an interconnected system rather than isolated elements (Image generated by ChatGPT 5)

The true AI governance challenge isn't pilot failures: it's that Boards' six core concerns demand simultaneous orchestration yet receive sequential attention through project-level adoption. In this article, I show how these interconnected priorities form the proper diagnostic lens for AI governance, revealing why addressing them as a system rather than individually determines the difference between transformation and yet another failure.


After the AI Amnesty: Practical Steps to Operationalise Discovered Shadow AI

Llantwit Major | Published in AI and Board | 12 minute read
A corporate transformation scene showing AI tools transitioning from shadows into organised, illuminated workflows with visible governance frameworks and collaborative teams (Image generated by ChatGPT 5)

Following your AI amnesty programme, speed matters: employees who disclosed shadow AI usage expect enablement, not restriction, and the post-amnesty window is critical. In this article, I provide a roadmap for transforming those discoveries into governed capabilities that boost organisational productivity and reduce the risk of AI slipping back into the shadows.