Tagged with: #board-governance
Posts tagged with #board-governance present thought-leadership on structuring your governance approach to match the velocity of AI-driven decisions while maintaining robust accountability and transparency.
Llantwit Major | Published in AI and Board | 8 minute read
Workers with genuine AI capabilities command premiums of 28-56%; those targeting AI-exposed roles without substantive skill development face a 29% earnings penalty. Same roles, opposite outcomes: the difference lies in the quality of capability investment, not access to tools. This article examines why this bifurcation extends to the Boardroom itself, where the IoD now positions AI competence as a core NED responsibility. For Boards, the strategic question becomes: is your workforce developing verification and judgement, or just collecting certifications — and can you tell the difference?
Llantwit Major | Published in AI and Board | 9 minute read
AI’s primary value isn’t replacing people; it’s releasing the intellectual capital trapped in undifferentiated work. Yet in many Boardrooms, workforce reduction remains the default success metric for AI initiatives. This article makes the case for the redeployment dividend: redirecting freed human capacity toward outcome-impacting work, complex judgement, and innovation that AI cannot replicate. For Boards, the strategic question shifts from “how many roles disappear?” to “what valuable work aren’t we doing because our best people are buried in tasks they don’t need to do?”
New York | Published in AI and Board | 10 minute read
As we return to our desks for 2026, the AI forces demanding attention aren’t distant possibilities but strategic choices already in motion. AI is embedding itself into enterprise applications faster than organisations can govern it, whilst simultaneously eroding the human capabilities needed to oversee it. In this article I examine five of these forces — AI’s shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.
Washington DC | Published in AI and Board | 14 minute read
In 2025 Boardrooms saw a collective shift in how they thought about AI’s role. What they spent 2023 and 2024 reacting to became a question of strategic investment in organisational infrastructure. They moved from “what can it do?” and “should we use it?” to “how do we navigate competing pressures and make this core to how we operate?” In this article, I examine the five interconnected inflections that drove this shift — and what they mean for Boards entering 2026.
Llantwit Major | Published in AI and Board | 8 minute read
Forty-two percent of companies abandoned the majority of their AI initiatives this year — not because AI failed, but because organisations applied generative AI to problems better solved by traditional machine learning or deterministic automation. This article examines the recalibration underway as sophisticated adopters discover that LLMs excel at specific tasks but prove expensive and unreliable when mismatched to problem domains. For Boards, this shift presents an opportunity to right-size investments through hybrid architectures that match capabilities to problems, capturing value through strategic deployment rather than universal LLM adoption.
London | Published in AI and Board | 9 minute read
America’s 19GW power shortfall by 2028 is forcing hyperscalers to build their own generation, but the strategic insight is what happens next: surplus capacity transforms AI infrastructure operators from energy consumers into grid actors. This article examines how distributed generation reshapes the relationship between technology companies and national grids, exploring whether the UK’s smaller system enables transformation or creates concentration risk. For Boards, this evolution demands governance frameworks that address not just AI deployment but grid participation — before the transition forces answers upon them.
Llantwit Major | Published in AI and Board | 11 minute read
Boards frequently overestimate AI maturity by focusing on tool deployments rather than genuine capabilities, mistaking isolated pilot successes for systemic organisational readiness. This article exposes the three patterns that create the illusion—tool-centric thinking, pilot success traps, and hype-driven metrics—and provides a diagnostic framework to reveal true position and enable targeted advancement.
London | Published in AI and Board | 13 minute read
Minimum lovable governance marks a shift from episodic compliance scrambles to continuous, embedded oversight that people actually want to use. In this article I explain how governance can achieve necessary guardrails whilst earning adoption rather than resistance — like an arbour that guides growth without constraining it. For Boards, minimum lovable governance presents a practical path: the operating principle that makes AI governance work when traditional approaches simply get routed around.
Llantwit Major | Published in AI, Board and Emerging | 10 minute read
World models mark AI’s shift toward true predictive power, allowing systems to simulate future scenarios and help businesses move from reacting to events to anticipating them. Drawing on emerging research, including Yann LeCun’s work on simulation-based intelligence, this article highlights the practical gains industries like aviation and finance are seeing in operational efficiency through these future-looking tools. For Boards, world models present a tantalising prospect: the opportunity to turn future insight into present advantage.
New York | Published in AI and Board | 15 minute read
Organisations are transferring decision-making agency to AI systems while accountability remains with humans, yet Boards approve AI deployment without investing in the verification capability needed to ensure it. In this article, I demonstrate why this creates a strategic choice with measurable consequences: augmentation preserves expertise pipelines whilst achieving efficiency gains, but replacement destroys capabilities that cannot be rebuilt, turning apparent cost reduction into systematic competitive disadvantage.