Tagged with: #ai-adoption

Navigate the journey from AI experimentation to enterprise-wide implementation with frameworks that address multi-speed adoption across different business functions. These articles explore how organisations progress through the AI Stages of Adoption, from initial pilots to scaled transformation, while building essential capabilities across governance, infrastructure, and culture. Learn practical approaches for overcoming shadow AI, establishing AI Centres of Excellence, and creating sustainable AI practices that deliver measurable business value.

AI and the Director: A Practical Playbook for Governing What You Can't Fully See

London | Published in Board | 11 minute read
A figure in a dark suit, partially concealed behind a heavy charcoal velvet curtain, one hand gripping the curtain edge in sharp directional light against a black background — a visual metaphor for the unseen operator whose workings a director is expected to trust without seeing. (Image generated by ChatGPT 5.4)

The information asymmetry between management and the Board has always been the central tension of governance. For AI, it is no longer manageable through existing structural checks; the distance is not merely larger than in previous technology waves, it is qualitatively different. A director must be able to interrogate maturity claims, assess whether governance is operational or merely presentational, and identify which AI risks are personal development challenges and which are failures of oversight itself. The IoD has formally named the gap. This article defines what closing it actually requires: not technical fluency, but specific capacities for independent evaluation mapped against the governance obligations every director carries, and a diagnostic framework for identifying exactly where the work needs to start.


The Great Remaking: Why Fast Following Does Not Work When the Gap Compounds

Llantwit Major | Published in AI | 13 minute read
Aerial view of three large tidal whirlpools swirling in a warm golden coastal bay at sunset, surrounded by tree-lined shores and sandy beaches, representing the three self-reinforcing loops — data, talent, and process redesign — that compound the AI advantage gap over time (Image generated by ChatGPT 5.2)

Every previous technology wave rewarded fast followers. Identify what the leaders built, acquire or replicate it, close the gap. That logic fails for The Great Remaking — not because AI is different technology, but because the source of advantage is not a product that can be studied and replicated. It is operational accumulation: proprietary data shaped by AI-integrated workflows, human capability developed through sustained practice, and institutional knowledge embedded through iterative redesign. None of it can be purchased. All of it compounds with time. This article explains the three self-reinforcing loops that make the gap harder to close with every month an organisation defers the decision to redesign.


MCP Explained: The Agent Infrastructure Standard Boards Need to Understand

Llantwit Major | Published in Data | 11 minute read
A sleek modern MCP hub on a dark walnut executive desk, with cables of different vintages connecting to surrounding legacy hardware including a CRT monitor, blue LED glowing on the hub. (Image generated by ChatGPT 5.2)

An AI agent that can only see the public internet is no more useful to an organisation than a very expensive search engine. The intelligence is not the constraint. The connectivity is. Model Context Protocol — MCP — is the infrastructure standard that connects agents to the proprietary data, systems, and processes that constitute real competitive advantage. This article explains what MCP is, why the major enterprise vendors have already converged on it, and the governance questions Boards should be asking before their technology teams answer them by default.


The Great Remaking: AI and the Race to Transform the Very Essence of Work

Llantwit Major | Published in AI and Board | 10 minute read
Aerial view of tidal sandbars at low tide with water channels carving new patterns through exposed sand, captured at golden hour to show shifting structure and continuous remaking of the coastline (Image generated by ChatGPT 5.2)

Over five decades, five technology revolutions each transformed organisations, but none restructured the essence of work itself. AI does — remaking how organisations think, decide, create, and deliver. The gap between bolting AI onto existing processes and redesigning how work is structured is already producing four times higher total shareholder returns for those who commit. This article defines what the essence of work actually is, why AI is remaking all four dimensions at different speeds, and why The Great Remaking is a race with compounding consequences that late movers cannot close through incremental catch-up.


The Inference Migration: What Consumer Agents Mean for Enterprise AI's Next Phase

New York | Published in AI and Board | 12 minute read
A corporate boardroom table overrun with small, friendly red robotic lobsters with glowing blue eyes, perched on laptops, documents, and coffee cups, with a city skyline visible through floor-to-ceiling windows and business charts displayed on a presentation screen (Image generated by ChatGPT 5.2)

Consumers are voluntarily paying $3,650–9,125 annually for always-on AI agents — more than their combined entertainment subscriptions. When ChatGPT travelled exactly this path from consumer novelty to shadow enterprise adoption within three years, most organisations were caught unprepared. Agentic AI is now running the same cycle. This article examines the inference migration — the architectural shift from episodic queries to always-on agents, why the determinism objection is narrower than Boards assume, the shadow agentic AI wave already forming, and why governance frameworks established in 2026 will determine which organisations capture agentic value and which scramble to retrofit controls on adoption already underway.


The Verification Premium: What Classical Training Reveals About AI Coding Costs

New York | Published in AI and Board | 13 minute read
My desktop setup: reMarkable Paper Pro for ideas, MacBook Air M2, a headless NVIDIA DGX Spark handling the heavy lifting, and the tools behind the experiment — Amazon Kiro, Claude Code, and a terminal window. Plus the late-night lighting that makes it feel like coding in the 1980s again.

AI coding tools don’t close the expertise gap — they amplify it. Research shows senior developers capture twice the productivity gains of juniors, while a randomised controlled trial found that experienced developers actually worked slower with AI than without, the hidden tax of verification offsetting the initial speed gains. This article explores the verification premium — and why Boards should ask not “can we use AI to write code cheaper?” but “do we have the verification capability to ensure AI-generated code creates value rather than debt?”


The AI Talent Bifurcation: Are You Building Skills or Collecting Credentials?

Llantwit Major | Published in AI and Board | 8 minute read
Skilled hands using a mallet and chisel to craft precise dovetail joints on a wooden frame in a traditional workshop, with quality woodworking tools laid out on a worn workbench, while a rough unfinished piece of wood with crude cuts sits nearby—same materials, different outcomes depending on capability and craft (Image generated by ChatGPT 5.2)

Workers with genuine AI capabilities command premiums of 28–56%; those targeting AI-exposed roles without substantive skill development face a 29% earnings penalty. Same roles, opposite outcomes: the difference lies in the quality of capability investment, not access to tools. This article examines why this bifurcation extends to the Boardroom itself, where the IoD now positions AI competence as a core NED responsibility. For Boards, the strategic question becomes: is your workforce developing verification and judgement, or just collecting certifications — and can you tell the difference?


The Redeployment Dividend: Why AI Will Unleash Your People, Not Replace Them

Llantwit Major | Published in AI and Board | 9 minute read
Hands carefully transplanting young seedlings into rich soil inside a sunlit greenhouse, with a black seedling tray of fresh plants, a wooden-handled trowel, and gardening gloves resting nearby on warm earth bathed in golden afternoon light. (Image generated by ChatGPT 5.2)

AI’s primary value isn’t replacing people; it’s releasing the intellectual capital trapped in undifferentiated work. Yet in many Boardrooms, workforce reduction remains the default success metric for AI initiatives. This article makes the case for the redeployment dividend: redirecting freed human capacity toward outcome-impacting work, complex judgement, and innovation that AI cannot replicate. For Boards, the strategic question shifts from “how many roles disappear?” to “what valuable work aren’t we doing because our best people are buried in tasks they don’t need to do?”


Return-to-Work Briefing: Five Forces Reshaping the Board AI Agenda in 2026

New York | Published in AI and Board | 10 minute read
Empty leather executive chair at the head of a polished boardroom table, five luminous streaks of light converging across the table surface toward an open briefing document and pen at the centre, stack of reports to one side. Dawn light breaks through clouds over a city skyline visible through floor-to-ceiling windows, casting warm golden and cool blue reflections across the scene (Image generated by ChatGPT 5.2)

As we return to our desks for 2026, the AI forces demanding attention aren’t distant possibilities but strategic choices already in motion. AI is embedding itself into enterprise applications faster than organisations can govern it, whilst simultaneously eroding the human capabilities needed to oversee it. In this article I examine five of these forces — AI’s shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.


The Year AI Grew Up: Five Inflections That Changed the Strategic Calculus in 2025

Washington DC | Published in AI and Board | 14 minute read
A sleek white humanoid robot sits among business executives in suits around a polished boardroom table, with documents and laptops before them and a city skyline bathed in golden sunrise light visible through floor-to-ceiling windows, symbolising AI's transition from experimental technology to strategic infrastructure with a seat at the Board table. (Image generated by ChatGPT 5.2)

In 2025 Boardrooms saw a collective shift in how they thought about AI’s role. What they spent 2023 and 2024 reacting to became a question of strategic investment in organisational infrastructure. They moved from “what can it do?” and “should we use it?” to “how do we navigate competing pressures and make this core to how we operate?” In this article, I examine the five interconnected inflections that drove this shift — and what they mean for Boards entering 2026.