
Tagged with: #leadership

Posts tagged with #leadership explore how executives can effectively guide their organisations through the complexities of digital evolution.

AI and the Director: A Practical Playbook for Governing What You Can't Fully See

London | Published in Board | 11 minute read |    
A figure in a dark suit, partially concealed behind a heavy charcoal velvet curtain, one hand gripping the curtain edge in sharp directional light against a black background — a visual metaphor for the unseen operator whose workings a director is expected to trust without seeing. (Image generated by ChatGPT 5.4)

The information asymmetry between management and the Board has always been the central tension of governance. For AI, it is no longer manageable through existing structural checks; the distance is not merely larger than in previous technology waves, it is qualitatively different. A director must be able to interrogate maturity claims, assess whether governance is operational or merely presentational, and identify which AI risks are personal development challenges and which are failures of oversight itself. The IoD has formally named the gap. This article defines what closing it actually requires: not technical fluency, but specific capacities for independent evaluation mapped against the governance obligations every director carries, and a diagnostic framework for identifying exactly where the work needs to start.


The Great Remaking: The Questions Boards Should Be Asking About Their AI Position

Llantwit Major | Published in AI | 10 minute read |    
Aerial view of a landscape as clouds gradually clear, with sunlight revealing the underlying terrain, representing how a board-level diagnostic cuts through activity metrics to expose the organisation’s true AI position (Image generated by ChatGPT 5.2)

The part of AI value that is technological and replicable is also the part that standard progress measures capture best. Pilot counts, budget lines, and strategy documents say nothing about whether the essence of work is genuinely being remade, or whether the three compounding loops are operating. A Board that accepts those reports without probing them is not exercising oversight; it is ratifying a narrative that the evidence shows to be inflated. This article provides the diagnostic that real oversight requires: probing questions structured around the data, talent, and process redesign loops, with an interpretive guide to what credible answers look like, and what their absence reveals.


The Great Remaking: Why Fast Following Does Not Work When the Gap Compounds

Llantwit Major | Published in AI | 13 minute read |    
Aerial view of three large tidal whirlpools swirling in a warm golden coastal bay at sunset, surrounded by tree-lined shores and sandy beaches, representing the three self-reinforcing loops — data, talent, and process redesign — that compound the AI advantage gap over time (Image generated by ChatGPT 5.2)

Every previous technology wave rewarded fast followers. Identify what the leaders built, acquire or replicate it, close the gap. That logic fails for The Great Remaking — not because AI is different technology, but because the source of advantage is not a product that can be studied and replicated. It is operational accumulation: proprietary data shaped by AI-integrated workflows, human capability developed through sustained practice, and institutional knowledge embedded through iterative redesign. None of it can be purchased. All of it compounds with time. This article explains the three self-reinforcing loops that make the gap harder to close with every month an organisation defers the decision to redesign.


The Great Remaking: How the Four Dimensions of Work Are Transforming

Llantwit Major | Published in AI | 15 minute read |    
Four paths through landscapes at different stages of transformation converging into a single route, symbolising how thinking, deciding, creating, and delivering work evolve differently but remain part of the same system of work in the AI era (Image generated by ChatGPT 5.2)

AI is not remaking the four dimensions of the essence of work at the same speed, through the same mechanisms, or toward the same end state. Treating them as a single strategic question is the mistake most organisations are currently making. The organisations pulling ahead understand which dimensions are moving fastest in their sector, where redesign would produce the greatest compounding advantage, and what form of human value would survive in each case. This article goes dimension by dimension through the specific patterns of remaking that distinguish organisations building structural advantage from those still augmenting the status quo.


The AI Talent Bifurcation: Are You Building Skills or Collecting Credentials?

Llantwit Major | Published in AI and Board | 8 minute read |    
Skilled hands using a mallet and chisel to craft precise dovetail joints on a wooden frame in a traditional workshop, with quality woodworking tools laid out on a worn workbench, while a rough unfinished piece of wood with crude cuts sits nearby—same materials, different outcomes depending on capability and craft (Image generated by ChatGPT 5.2)

Workers with genuine AI capabilities command premiums of 28-56%; those targeting AI-exposed roles without substantive skill development face a 29% earnings penalty. Same roles, opposite outcomes; the difference lies in the quality of capability investment, not access to tools. This article examines why this bifurcation extends to the Boardroom itself, where the IoD now positions AI competence as a core NED responsibility. For Boards, the strategic question becomes: is your workforce developing verification and judgement, or just collecting certifications, and can you tell the difference?


The Redeployment Dividend: Why AI Will Unleash Your People, Not Replace Them

Llantwit Major | Published in AI and Board | 9 minute read |    
Hands carefully transplanting young seedlings into rich soil inside a sunlit greenhouse, with a black seedling tray of fresh plants, a wooden-handled trowel, and gardening gloves resting nearby on warm earth bathed in golden afternoon light. (Image generated by ChatGPT 5.2)

AI’s primary value isn’t replacing people; it’s releasing the intellectual capital trapped in undifferentiated work. Yet in many Boardrooms, workforce reduction remains the default success metric for AI initiatives. This article makes the case for the redeployment dividend: redirecting freed human capacity toward outcome-impacting work, complex judgement, and innovation that AI cannot replicate. For Boards, the strategic question shifts from “how many roles disappear?” to “what valuable work aren’t we doing because our best people are buried in tasks they don’t need to do?”


Return-to-Work Briefing: Five Forces Reshaping the Board AI Agenda in 2026

New York | Published in AI and Board | 10 minute read |    
Empty leather executive chair at the head of a polished boardroom table, five luminous streaks of light converging across the table surface toward an open briefing document and pen at the centre, stack of reports to one side. Dawn light breaks through clouds over a city skyline visible through floor-to-ceiling windows, casting warm golden and cool blue reflections across the scene  (Image generated by ChatGPT 5.2)

As we return to our desks for 2026, the AI forces demanding attention aren’t distant possibilities but strategic choices already in motion. AI is embedding itself into enterprise applications faster than organisations can govern it, whilst simultaneously eroding the human capabilities needed to oversee it. In this article I examine five of these forces — AI’s shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.


The Accountability Gap: When AI Delegation Meets Human Responsibility

New York | Published in AI and Board | 15 minute read |    
Senior executives observing a fast-moving automated conveyor belt of AI-generated business reports in a modern corporate office, with unused quality control tools in the foreground illustrating the AI accountability gap (Image generated by ChatGPT 5)

Organisations are transferring decision-making agency to AI systems while accountability remains with humans, yet Boards approve AI deployment without investing in the verification capability needed to exercise that accountability. In this article, I demonstrate why this creates a strategic choice with measurable consequences: augmentation preserves expertise pipelines whilst achieving efficiency gains, but replacement destroys capabilities that cannot be rebuilt, turning apparent cost reduction into systematic competitive disadvantage.


After the AI Amnesty: Practical Steps to Operationalise Discovered Shadow AI

Llantwit Major | Published in AI and Board | 12 minute read |    
A corporate transformation scene showing AI tools transitioning from shadows into organised, illuminated workflows with visible governance frameworks and collaborative teams (Image generated by ChatGPT 5)

Following your AI amnesty programme, speed matters: employees who disclosed shadow AI usage expect enablement, not restriction, and the post-amnesty window is short. In this article, I provide a roadmap for transforming those disclosures into governed capabilities that boost organisational productivity and reduce the risk of AI slipping back into the shadows.


Shadow AI and the Case for an AI Amnesty

Llantwit Major | Published in AI and Board | 15 minute read |    
A corporate office environment showing contrasting scenes: shadowy figures using AI tools in darkness on one side, while the other shows transparent, well-lit collaborative AI usage, symbolising the transformation from shadow AI to governed innovation (Image generated by AI)

With a 68% surge in shadow AI usage and 54% of employees saying they would use AI tools even if the company had not authorised them, Boards face a governance challenge that traditional compliance cannot solve. This article presents an AI amnesty as an important first step toward minimum lovable governance: transforming hidden risks into strategic assets whilst capturing employee-validated innovation. When 95% of enterprise AI pilots fail to deliver measurable ROI yet shadow AI thrives everywhere, the path forward isn’t enforcement but structured disclosure programmes that build trust and position early adopters as governance standard-setters.