Return-to-Work Briefing: Five Forces Reshaping the Board AI Agenda in 2026

As we settle into the first working week of 2026, I want to examine the forces I believe will shape the Board AI agenda this year. Five demand particular attention: AI’s shift from content generation to decision support, inference economics reshaping deployment strategy, embodied AI introducing physical-world liability, verification gaps exposing governance failures, and AI governance professionalising into systematic capability.
Gartner projects that 40% of enterprise applications will feature AI agents by year-end, up from less than 5% in 2025. In the same timeframe, half of all organisations will introduce “AI-free assessments” to counter the critical-thinking atrophy that AI reliance has created. AI is embedding itself faster than organisations can govern it, whilst simultaneously eroding the human capabilities needed to oversee it.
This tension captures the challenge Boards face. The luxury of treating AI as tomorrow’s problem — something to monitor rather than govern — has ended. These aren’t distant possibilities but forces already in motion, arriving on agendas regardless of preparedness.
Force 1: AI shifts from generating to deciding
The dominant AI narrative of 2023 to 2025 focused on generation: content creation, image synthesis, code completion. That story has shifted; the inflection toward decision support is underway, and AI is moving from content creator to decision partner.
World models are reaching narrow commercial viability in domains where prediction creates decisive advantage. Aviation, supply chain, and financial forecasting represent the vanguard, with BCG’s AI-First Airline report documenting 20-40% operational efficiency improvements through predictive systems. These aren’t incremental gains from better chatbots; they represent AI anticipating disruption before it cascades through networks, predicting equipment failure before it causes delays, and optimising crew allocation across scenarios that human planners cannot compute in real time.
As I explored in my world models article last November, the 2-3 year timeline for narrow commercial applications positions 2026 precisely at this inflection point. Boards that built predictive indicator capabilities with current AI are now positioned to adopt these more sophisticated systems. Those still debating whether to approve their first chatbot deployment find themselves strategically behind before the year begins.
Boards asking “have we deployed AI?” are measuring the wrong thing. The question that matters is whether AI is informing decisions that shape competitive position. Content generation became table stakes; decision support is now the differentiator.
Board question: Is our AI strategy still focused on content generation, or have we pivoted toward decision support?
Force 2: The economics of inference reshape AI strategy
The ‘bigger is better’ orthodoxy that dominated AI discourse is meeting economic reality. Capability-per-dollar becomes the metric that matters, not parameter counts or training costs.
Deloitte’s 2026 TMT Predictions reveals that inference — running AI models — will account for two-thirds of all AI computing power by 2026. Training a model happens once; running it happens continuously. And despite forecasts to the contrary, most inference will still take place in data centres and on-premises enterprise servers using costly, power-intensive chips — not at the edge on inexpensive alternatives. Organisations that optimised for training costs whilst ignoring inference economics discover they’ve built capabilities they cannot afford to operate at scale.
Small Language Models offer a potential recalibration of this equation. Faster inference, lower costs, easier deployment, and reduced energy footprint come at the price of narrower capability. Vertical and industry-specific models provide higher accuracy within defined domains, more predictable behaviour, and simpler compliance profiles. As I discussed in my LLM selection guidance, the choice isn’t simply “which model is most powerful” but “which model delivers required capability at sustainable cost.”
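To make capability-per-dollar concrete, here is a minimal back-of-envelope sketch. Every figure in it (query volume, token counts, per-token prices) is an illustrative assumption rather than real vendor pricing; the value is in the shape of the comparison, not the specific numbers.

```python
# Back-of-envelope inference economics. All figures are illustrative
# assumptions, not vendor pricing; the point is the shape of the comparison.

def annual_inference_cost(queries_per_day: int,
                          tokens_per_query: int,
                          cost_per_million_tokens: float) -> float:
    """Rough annual spend for serving a model at a given query volume."""
    tokens_per_year = queries_per_day * tokens_per_query * 365
    return tokens_per_year / 1_000_000 * cost_per_million_tokens

# Hypothetical workload: 50,000 queries/day at ~1,500 tokens per query.
workload = dict(queries_per_day=50_000, tokens_per_query=1_500)

large_general_model = annual_inference_cost(**workload, cost_per_million_tokens=10.0)
small_vertical_model = annual_inference_cost(**workload, cost_per_million_tokens=0.50)

print(f"Large general-purpose model: ~${large_general_model:,.0f}/year")
print(f"Small vertical model:        ~${small_vertical_model:,.0f}/year")
# Under these assumptions the gap is 20x, and unlike training it recurs
# every year the system stays in production.
```

The comparison is deliberately crude, but it is the kind of working Boards should expect to see before approving deployment at scale: training is a one-off line item, whereas inference recurs for as long as the capability operates.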
The carbon question compounds these economics in ways Boards aren't yet asking about. Scope 2 emissions from AI inference are rarely measured and almost never reported, yet the energy footprint of continuous model operation may prove material as sustainability reporting requirements tighten. Whether AI inference eventually warrants its own reporting category or simply demands better attribution within existing scopes, the uncomfortable truth is that most organisations cannot answer the question today: AI energy consumption remains invisible in their sustainability reporting. The interplay between AI ambition and energy sovereignty creates strategic constraints that technology teams alone cannot navigate.
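The arithmetic itself is simple: energy per query, query volume, data-centre overhead, and grid carbon intensity multiply out to a Scope 2 estimate. The sketch below shows that calculation with placeholder inputs only; the hard part is that most organisations currently hold none of these numbers, which would have to come from metered energy data or provider disclosures.

```python
# Sketch of the arithmetic behind a Scope 2 estimate for AI inference.
# Every input is a placeholder the organisation would need to source itself.

def inference_scope2_kg(energy_per_query_wh: float,
                        queries_per_year: int,
                        datacentre_pue: float,
                        grid_kgco2e_per_kwh: float) -> float:
    """Estimated Scope 2 emissions from inference, in kgCO2e."""
    kwh = energy_per_query_wh * queries_per_year / 1_000 * datacentre_pue
    return kwh * grid_kgco2e_per_kwh

# Hypothetical inputs: 3 Wh per query, 18M queries/year, PUE of 1.2,
# grid intensity of 0.25 kgCO2e/kWh.
estimate = inference_scope2_kg(3.0, 18_000_000, 1.2, 0.25)
print(f"~{estimate / 1_000:.1f} tCO2e per year from inference")
```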
Board question: Do we know the carbon footprint of our AI inference, and would our stakeholders be satisfied with the answer?
Force 3: Embodied AI enters the risk register
AI governance to date focused primarily on decisions, outputs, and data. 2026 introduces a different category: AI that acts in the physical world, bringing consequences that can’t be retracted with a software update.
The question changes from “what did the AI decide?” to “what did the AI do?” Previous governance frameworks assumed AI would recommend and humans would act. Embodied systems collapse that distinction. Autonomous systems are becoming standard in manufacturing, logistics, and retail environments that demand sub-millisecond response times, edge-to-cloud orchestration, and continuous operation without human supervision. The capital markets are pricing this transition aggressively: Elon Musk’s September declaration that “~80% of Tesla’s value will be Optimus” reflects investor appetite for humanoid robotics that Wall Street analysts now estimate as a $5-7 trillion market by 2050. Meanwhile, Deloitte’s TMT Predictions notes that whilst annual industrial robot sales have stalled at around half a million units since 2021, an inflection point by 2030 could see shipments double to one million annually, driven by labour shortages and exponential advances in computing power and specialised AI models.
The liability implications are substantial. Gartner’s Strategic Predictions forecast that “death by AI” legal claims will exceed 2,000 by end of 2026 due to insufficient risk guardrails. The EU Product Liability Directive, effective December 2026, enables compensation claims for defective AI products, creating statutory obligations that Boards cannot delegate away. Insurance coverage questions multiply: does existing product liability cover AI-caused damage? Do professional indemnity policies extend to AI-assisted decisions? Are exclusions buried in policy language that risk committees haven’t examined?
My agentic AI explainer outlined how organisations should understand these systems. The governance challenge now is translating that understanding into risk registers, insurance reviews, and Board oversight mechanisms that can address physical-world consequences rather than purely digital outputs.
Board question: If our AI causes physical harm, who is liable and how are we insured?
Force 4: The verification imperative
As AI capability increases, the expertise required to verify its outputs increases with it. This creates an uncomfortable asymmetry: the organisations deploying AI most aggressively may be the least equipped to verify what their systems produce.
Stanford HAI research demonstrates the scale of this challenge. General-purpose LLMs hallucinate on legal queries between 58% and 82% of the time. This isn’t a marginal error rate that human review can catch; it’s a fundamental reliability problem that requires subject matter expertise to identify. Yet most organisations deploying AI-assisted legal research have not correspondingly invested in the legal expertise needed to verify outputs.
The pattern extends beyond legal applications. Wherever AI outputs sound authoritative — financial projections, medical summaries, strategic analyses — the same risk applies. Without domain expertise to interrogate these outputs, organisations cannot distinguish insight from hallucination. They’ve deployed capability without building verification, creating what I’ve called the accountability gap: responsibility for AI outputs remains with humans who lack the capability to fulfil that responsibility.
The 2025 AI Governance Survey conducted by Pacific AI found that whilst 75% of organisations have established AI usage policies, only 36% have adopted a formal governance framework. The gap between having policies on paper and having operational oversight in practice illustrates a broader pattern: Boards approve AI deployment without approving corresponding verification investment, creating a maturity mirage where deployment metrics suggest progress whilst verification gaps create exposure.
Gartner’s prediction that 50% of organisations will introduce AI-free assessments to counter critical-thinking atrophy addresses half this challenge. The other half is ensuring organisations retain sufficient expertise to verify what AI produces, not just to think without AI assistance.
Board question: Have we invested in verification capability proportionate to our AI deployment?
Force 5: AI governance professionalises
AI governance is moving from ad-hoc oversight to systematic capability. Structures, roles, and competencies that were optional in 2025 become baseline expectations in 2026.
Chief AI Officers are emerging as distinct executive roles, separate from Chief Data Officers and Chief Technology Officers, accountable for AI strategy, governance, and value realisation. The distinction matters: CDOs focus on data as an asset; CTOs focus on technology infrastructure; CAIOs focus on AI as a business capability that spans both whilst introducing governance requirements that neither traditional role encompasses.
Board-level AI literacy is following a similar trajectory. The Institute of Directors’ Business Paper on AI Governance in the Boardroom provides practical guidance for directors navigating oversight obligations, presenting twelve principles updated for 2025 that integrate new legislation and boardroom realities. Separately, the EU AI Act places AI Literacy on statutory footing for all AI systems regardless of risk level, creating legal requirements where professional expectations previously sufficed. “I don’t understand AI” becomes as unacceptable for directors as “I don’t understand our finances” or “I don’t read the accounts.”
The AI Centre of Excellence (AI CoE) moves from optional innovation accelerator to essential governance infrastructure. As I explored in my AI CoE series, organisations need systematic coordination of AI deployment, governance, and value realisation rather than project-by-project oversight that creates fragmentation.
Gartner predicts that 75% of hiring processes will include AI proficiency certifications and testing by 2027, extending governance expectations to workforce capability. Boards must consider not just whether the organisation can deploy AI, but whether its people can work alongside it responsibly — and whether governance structures can oversee both.
This professionalisation creates opportunity for organisations that have built systematic governance capability, and exposure for those still treating AI as a technology project rather than a strategic transformation.
Board question: Does our governance structure reflect AI’s strategic importance, or are we still treating it as a technology project?
The convergence ahead
These five forces interconnect in ways that amplify their individual impact. The capability shift from generation to decision support intensifies verification requirements whilst creating new liability exposures. Inference economics constrain deployment options whilst embodied AI expands the risk surface. Governance professionalisation provides frameworks to navigate all of the above, but only for organisations that invest in building that capability.
The workforce implications deserve particular attention. As AI takes on more decision-making responsibility, verification becomes essential, and organisations face a duty of care question: how do they ensure employees benefit from AI-augmented work rather than being displaced by it? The answer lies partly in governance structures that treat workforce transition as strategic priority, not afterthought — building verification expertise across functions at different speeds, creating human-AI collaboration frameworks, and ensuring productivity gains translate into employee opportunity rather than purely shareholder returns.
2026 is the year Boards can no longer delegate AI to technologists. The forces arriving demand Board-level attention. They determine competitive positioning, risk exposure, regulatory compliance, and stakeholder confidence in ways that technology teams alone cannot address.
Boards that defer don’t avoid these decisions; they make them by default. Inaction is itself a choice, one that accepts whatever AI trajectory operational teams determine without strategic direction. The forces described here will reshape organisations regardless of Board engagement; the question is whether that reshaping reflects deliberate strategy or accumulated drift.
The challenges are substantial but manageable. What I’ve called minimum lovable governance — building just enough structure to ensure responsible deployment whilst preserving agility — offers a path between paralysis and recklessness. Organisations with governance structures in place, verification capability proportionate to deployment, and Board-level understanding of AI implications will navigate 2026 from a position of strength. Those without will spend the year reacting to consequences they didn’t anticipate from deployments they didn’t fully understand.
Let’s Continue the Conversation
Thank you for reading about the five forces reshaping the Board AI agenda in 2026. I’d welcome hearing about your Board’s experience navigating these challenges, whether you’re grappling with the shift from content generation to decision support, wrestling with inference economics and carbon footprint questions, addressing embodied AI liability gaps, building verification capability proportionate to deployment, or professionalising your AI governance structures. Which force feels most urgent for your organisation as 2026 begins?