Mario Thomas

Monthly Newsletter | February 2026

The Human Factor in AI Governance

As we enter February 2026, a clear theme emerges from January's articles: the human element remains central to AI success. Whilst technology advances rapidly, the organisations capturing genuine value are those investing in people — their capabilities, their redeployment, and their judgement.

This month's collection begins with a return-to-work briefing examining five forces reshaping Board AI agendas: the shift from content generation to decision support, inference economics, embodied AI liability, verification gaps, and governance professionalisation. From there, we explore why AI's primary value isn't replacing people but releasing intellectual capital trapped in undifferentiated work — what I call the redeployment dividend. The talent bifurcation article reveals a striking disparity: workers with genuine AI capabilities command premiums of 28-56%, whilst those targeting AI-exposed roles without substantive skill development face a 29% earnings penalty.

The verification premium piece draws on my own coding experiment to demonstrate why classical training matters more than ever when AI writes the code — expertise doesn't become less relevant with AI assistance; it becomes the determining factor in outcomes.

If your time is limited, I particularly recommend the talent bifurcation article for its practical implications on workforce investment, and the redeployment dividend piece for reframing how Boards should measure AI success.

How is your organisation balancing AI capability building with the verification expertise needed to ensure quality outcomes?

- Mario

January's Strategic Insights


Return-to-Work Briefing: Five Forces Reshaping the Board AI Agenda in 2026

Published 4 January 2026 | 10 minute read

As we return to our desks for 2026, the AI forces demanding attention aren't distant possibilities but strategic choices already in motion. This article examines five forces: AI's shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.

Read Article

The Redeployment Dividend: Why AI Will Unleash Your People, Not Replace Them

Published 11 January 2026 | 9 minute read

AI's primary value isn't replacing people — it's releasing the intellectual capital trapped in undifferentiated work. Yet in many Boardrooms, workforce reduction remains the default success metric for AI initiatives. This article makes the case for redirecting freed human capacity toward outcome-impacting work, complex judgement, and innovation that AI cannot replicate.

Read Article

The AI Talent Bifurcation: Are You Building Skills or Collecting Credentials?

Published 18 January 2026 | 8 minute read

Workers with genuine AI capabilities command premiums of 28-56%; those targeting AI-exposed roles without substantive skill development face a 29% earnings penalty. The same roles, opposite outcomes — and the difference lies in the quality of capability investment, not access to tools. This bifurcation extends to the Boardroom itself, where the IoD now positions AI competence as a core NED responsibility.

Read Article

The Verification Premium: What Classical Training Reveals About AI Coding Costs

Published 25 January 2026 | 13 minute read

AI coding tools don't close the expertise gap — they amplify it. Research shows senior developers capture twice the productivity gains of juniors, whilst a randomised controlled trial found experienced developers actually worked slower with AI than without: the hidden tax of verification offset the initial speed gains. This article explores why Boards should ask not 'can we use AI to write code cheaper?' but 'do we have the verification capability to ensure AI-generated code creates value rather than debt?'

Read Article
LinkedIn | X | GitHub | YouTube | Quora | Reddit | Medium | Pinterest | Telegram | RSS | Substack
Terms | Privacy | Cookies

Copyright © 2026 Mario Thomas. All rights reserved.

You are receiving this email because you signed up to receive it on mariothomas.com on {{date_subscribed}} at {{time_subscribed}}. You can unsubscribe at any time.