Tagged with: #governance

Posts tagged with #governance set out how to ensure AI decisions align with organisational values through governance structures that balance agility with appropriate controls.

The Return of Traditional AI: Organisations Are Rethinking Their LLM-First Strategies

Llantwit Major | Published in AI and Board | 8 minute read |
Precision analogue gauges and industrial instruments including pressure dials, voltmeter, RPM gauge and digital timer arranged on a steel workstation, with neural network visualisations and probability distribution curves floating between monitoring displays in a control room background (Image generated by ChatGPT)

Forty-two percent of companies abandoned the majority of their AI initiatives this year — not because AI failed, but because organisations applied generative AI to problems better solved by traditional machine learning or deterministic automation. This article examines the recalibration underway as sophisticated adopters discover that LLMs excel at specific tasks but prove expensive and unreliable when mismatched to problem domains. For Boards, this shift presents an opportunity to right-size investments through hybrid architectures that match capabilities to problems, capturing value through strategic deployment rather than universal LLM adoption.


A New Grid Actor: AI Infrastructure Is Becoming Energy Infrastructure

London | Published in AI and Board | 9 minute read |    
Modern hyperscale data centre facility at golden hour with small modular reactor cooling towers and wind turbines visible in the background, transmission lines connecting the facilities bidirectionally, set in British countryside with rolling green hills (Image generated by ChatGPT 5)

America’s 19GW power shortfall by 2028 is forcing hyperscalers to build their own generation, but the strategic insight is what happens next: surplus capacity transforms AI infrastructure operators from energy consumers into grid actors. This article examines how distributed generation reshapes the relationship between technology companies and national grids, exploring whether the UK’s smaller system enables transformation or creates concentration risk. For Boards, this evolution demands governance frameworks that address not just AI deployment but grid participation — before the transition forces answers upon them.


The AI Maturity Mirage: Diagnosing the Gap Between Investment and Readiness

Llantwit Major | Published in AI and Board | 11 minute read |    
A glass-walled boardroom at dusk showing executives reviewing glowing data visualisations, with the window reflection revealing fragmented metrics and red indicators to illustrate the gap between perceived and actual AI maturity (Image generated by ChatGPT 5)

Boards frequently overestimate AI maturity by focusing on tool deployments rather than genuine capabilities, mistaking isolated pilot successes for systemic organisational readiness. This article exposes the three patterns that create the illusion — tool-centric thinking, pilot success traps, and hype-driven metrics — and provides a diagnostic framework to reveal true position and enable targeted advancement.


Minimum Lovable Governance: The AI Operating Principle Boards Should Use

London | Published in AI and Board | 13 minute read |    
A lightweight metal arbour frames an open pathway through a landscaped garden at dawn, representing governance as structure that guides and supports growth rather than constrains it (Image generated by ChatGPT 5)

Minimum lovable governance marks a shift from episodic compliance scrambles to continuous, embedded oversight that people actually want to use. In this article I explain how governance can achieve necessary guardrails whilst earning adoption rather than resistance — like an arbour that guides growth without constraining it. For Boards, minimum lovable governance presents a practical path: the operating principle that makes AI governance work when traditional approaches simply get routed around.


Agentic AI: Strip Away the Hype and Understand the Real Strategic Choice

Llantwit Major | Published in AI and Board | 17 minute read |    
Modern corporate boardroom scene split between thoughtful business executives on the left working with documents representing human-in-the-loop decision-making, and multiple glowing AI agent representations on the right operating autonomously in parallel, symbolising the strategic choice about where to transfer agency from humans to machines (Image generated by ChatGPT 5)

Agentic AI has become this year’s poster child, dethroning generative AI as the technology everyone wants to discuss. Yet fundamental misunderstandings about what agentic systems actually do create barriers to successful adoption. This article demystifies the hype by revealing the core truth: agentic AI is generative AI in a loop, with the machine rather than a human driving iteration. The strategic question is therefore not one of technological sophistication, but of where, and at what scale, to consciously transfer decision-making agency from people to systems.


Completing the AI Strategy Journey: From Policy to Practice Through Coherent Actions

Llantwit Major | Published in AI and Board | 14 minute read |    
A grand concert hall with a full orchestra mid-performance, perfectly synchronised under the conductor's dynamic leadership. Every section plays in harmony with subtle motion blur suggesting bow movements, while the audience sits in shadow, leaning forward in engagement. Golden stage lighting creates unity across the entire ensemble, representing coherent actions transforming strategy into systematic execution (Image generated by ChatGPT 5)

Deloitte’s 2025 survey shows 69% of boards discuss AI regularly yet only 33% feel equipped to oversee it, whilst MIT finds workers at over 90% of companies already use shadow AI without governance – exposing the execution gap between strategy and action. In this article, I provide sequenced, mutually reinforcing actions that transform the Complete AI Framework from guiding policy into systematic execution, building compound advantage from Day 1 amnesty through Quarter 4 scaling rather than accumulating another collection of disconnected initiatives.


AI’s Interconnected Challenge: Diagnosing the Six Concerns of the Board

Sydney | Published in AI and Board | 12 minute read |    
A concert hall with a conductor at the podium studying six different musical scores spread before them, with six distinct beams of stage light illuminating different sections of empty orchestra seats, representing the Six Concerns that must be understood as an interconnected system rather than isolated elements (Image generated by ChatGPT 5)

The true AI governance challenge isn’t pilot failures – it’s that Boards’ six core concerns demand simultaneous orchestration yet receive sequential attention through project-level adoption. In this article, I show how these interconnected priorities form the proper diagnostic lens for AI governance, revealing why addressing them as a system rather than individually determines the difference between transformation and yet another failure.


After the AI Amnesty: Practical Steps to Operationalise Discovered Shadow AI

Llantwit Major | Published in AI and Board | 12 minute read |    
A corporate transformation scene showing AI tools transitioning from shadows into organised, illuminated workflows with visible governance frameworks and collaborative teams (Image generated by ChatGPT 5)

Following your AI amnesty programme, speed matters: employees who disclosed shadow AI usage expect enablement, not restriction – the post-amnesty window is critical. In this article, I provide a roadmap for transforming discoveries into governed capabilities that boost organisational productivity and reduce the risk of AI slipping back into the shadows.


Shadow AI and the Case for an AI Amnesty

Llantwit Major | Published in AI and Board | 15 minute read |    
A corporate office environment showing contrasting scenes: shadowy figures using AI tools in darkness on one side, while the other shows transparent, well-lit collaborative AI usage, symbolising the transformation from shadow AI to governed innovation (Image generated by AI)

With a 68% surge in shadow AI usage and 54% of employees saying they would use AI tools even without company authorisation, Boards face a governance challenge traditional compliance cannot solve. This article presents AI amnesty as an important first step towards minimum lovable governance – transforming hidden risks into strategic assets whilst capturing employee-validated innovation. When 95% of enterprise AI pilots fail to deliver measurable ROI yet shadow AI thrives everywhere, the path forward isn’t enforcement but structured disclosure programmes that build trust and position early adopters as governance standard-setters.


AI Sovereignty: A Board’s Guide to Navigating Conflicting National Agendas

London | Published in AI and Board | 15 minute read |    
Business executives in suits stand on a glass platform at a crossroads, overlooking three diverging roads leading to a classical European city in soft blue light, a futuristic American skyline with glowing data streams, and a Chinese metropolis with red-toned interconnected bridges, symbolising transparency, innovation, and integration. (Image generated by ChatGPT 5)

AI governance is fragmenting into incompatible systems — Europe prioritising trust through transparency, America pursuing speed through scale, China maintaining control through integration — forcing Boards to choose rather than compromise. In this article, I explore the sovereignty trilemma and present three strategic stances for navigating these landscapes without fracturing your strategy.