The Year AI Grew Up: Five Inflections That Changed the Strategic Calculus in 2025

Washington DC | Published in AI and Board | 14 minute read
A sleek white humanoid robot sits among business executives in suits around a polished boardroom table, with documents and laptops before them and a city skyline bathed in golden sunrise light visible through floor-to-ceiling windows, symbolising AI's transition from experimental technology to strategic infrastructure with a seat at the Board table. (Image generated by ChatGPT 5.2)

For Boards, 2025 brought the collective realisation that AI had stopped being a capability question and become a question of strategic investment in organisational infrastructure. Regulation became enforceable then adapted under geopolitical pressure; energy constraints moved from operational footnotes to strategic agendas; sovereignty fragmented into incompatible ecosystems; the experimentation window closed as pilots failed to reach production; and agentic AI hype met operational reality. Five interconnected inflections, each reinforcing the others. 2025 was the year AI grew up.

These five inflections weren’t separate developments but interconnected forces that collectively transformed how Boards must think about AI. Readers who followed along throughout the year will recognise these themes. This piece connects those threads into a coherent picture of what changed and what it means.

The collective implication for Boards is profound: AI is now infrastructure, not innovation theatre. The organisations that recognised this shift in 2025 are positioned for 2026. Those still treating AI as experimental face intensifying competitive pressure — but there is a pathway to catching up.

Inflection 1: Accountability arrived — then didn’t

The first inflection point was the arrival of real accountability for AI governance, followed almost immediately by its recalibration.

On 2 February 2025, the EU AI Act’s prohibitions became enforceable — social scoring systems, manipulative AI, emotion recognition in workplaces. For the first time, specific AI applications faced outright bans with meaningful penalties. Six months later, on 2 August 2025, the General-Purpose AI obligations activated. Transparency requirements, risk assessments, and 10-year documentation retention became mandatory for foundation model providers. The penalty regime went live: fines of up to €35 million or 7% of global turnover, whichever is higher, for serious violations, as DLA Piper’s analysis confirmed. The AI Office became operational. Member States designated national competent authorities. Governance failures now carried real consequences.
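To make that exposure concrete, here is a minimal sketch of the penalty arithmetic. The Act applies the higher of the fixed cap and the turnover-based cap for the most serious violations; the turnover figure below is purely illustrative.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound for the most serious EU AI Act violations:
    the higher of EUR 35m and 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# An undertaking with EUR 10bn in global turnover faces exposure of up to
# EUR 700m, twenty times the fixed cap.
print(f"{max_fine_eur(10_000_000_000):,.0f}")  # 700,000,000
```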

Then came the recalibration.

On 19 November 2025, the European Commission introduced the Digital Omnibus proposal. The proposal deferred high-risk AI obligations from August 2026 to potentially December 2027 or later, as Cooley’s analysis detailed. It eased data usage restrictions for AI training, extended SME regulatory benefits to small mid-caps, and simplified technical documentation requirements.

This “enforce then relax” arc was not regulatory failure — it was the sovereignty trilemma manifesting in real time. Within a single year, the EU discovered that optimising purely for trust creates competitive disadvantages against jurisdictions prioritising speed. The US deregulation acceleration under the Trump administration, with its explicit “AI Arms Race” positioning, created pressure that the original AI Act timeline could not withstand.

The strategic interpretation for Boards is clear: governance frameworks must be adaptive, not static. Compliance strategies built for the original timeline now face rework — but organisations that built flexible governance foundations can turn this shift into advantage. The regulation arc of 2025 validated that accountability matters whilst demonstrating that regulatory approaches must balance trust with competitive positioning.

The UK’s contrasting approach — sector-led regulation through the FCA, ICO, CMA, Ofcom, and MHRA, coordinated through the Digital Regulation Cooperation Forum — faces its own test. Without central AI legislation, UK organisations must navigate fragmented guidance whilst monitoring which regulatory stance ultimately proves most effective for enabling innovation whilst managing risk.

Inflection 2: Resource constraints became Board-level

The second inflection elevated energy from operational footnote to strategic imperative.

Leopold Aschenbrenner’s “Situational Awareness” essays first articulated the mathematics: trillion-dollar clusters requiring power equivalent to entire states. By late 2025, institutional validation had arrived. Goldman Sachs’ landmark research, “Powering the AI Era,” concluded that a lack of capital is not the most pressing bottleneck for AI progress — it’s the power needed to fuel it. Their projections put numbers on it: data centre power demand is expected to rise by 160% by 2030, with AI workloads driving the surge. In 2024, global hyperscaler capital expenditure reached approximately $800 million per day as the race for artificial general intelligence accelerated. The report notes that AI server racks consume ten times more power than traditional computing infrastructure, whilst data centre vacancy rates sit at record lows and new power at scale often cannot come online until 2028 or beyond.
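A quick back-of-envelope pass over those figures shows why they landed on Board agendas. The sketch below uses only the numbers cited above; treat it as illustrative arithmetic, not a forecast.

```python
# Annualise the cited 2024 hyperscaler spend, then apply the cited growth rate.
daily_capex_usd = 800e6                    # ~$800m per day (2024)
annual_capex_usd = daily_capex_usd * 365
print(f"~${annual_capex_usd / 1e9:.0f}bn per year")   # ~$292bn per year

# A 160% rise by 2030 means demand reaches 2.6x today's level.
multiplier_2030 = 1 + 1.60
print(f"~{multiplier_2030:.1f}x current data centre power demand by 2030")
```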

The IEA’s November 2025 commentary reinforced the European challenge, stating that overcoming energy constraints is key to delivering on the continent’s data centre goals. Planning delays and grid connection backlogs are creating competitive bottlenecks that financial engineering cannot solve.

For UK businesses, the constraint reality is particularly acute. Energy costs — still approximately 75% higher than before Russia’s invasion of Ukraine — create a persistent competitive disadvantage I explored in my analysis of UK sovereignty challenges. The UK imports 12% of its electricity, whilst US hyperscalers are securing dedicated nuclear capacity through partnerships with companies like Talen Energy and even restarting shuttered reactors like Three Mile Island. These deals demonstrate hyperscalers treating energy access as strategic necessity, not operational detail. The strategic implication extends further: as I explored in my analysis of hyperscaler grid participation, surplus generation capacity transforms AI infrastructure operators from energy consumers into grid actors — a shift that reshapes the relationship between technology companies and national grids. These options are largely unavailable to UK organisations operating within more constrained infrastructure and regulatory regimes.

What changed in 2025 was not the existence of these constraints but their recognition as Board-level concerns. Goldman Sachs formed a Capital Solutions Group specifically for AI infrastructure needs. Joint ventures between pension funds, sovereign wealth funds, and data centre operators signal a new infrastructure financing paradigm. The financial markets now understand a critical reality: energy access determines AI capability.

The Board implications are significant. Compute access moved from IT procurement to capital allocation decisions. Infrastructure dependencies now carry sovereignty implications. Energy strategy and AI strategy became inseparable in 2025 — and that linkage is permanent. Organisations that recognised this early can build energy considerations into their strategic planning, turning infrastructure constraints into informed positioning rather than reactive scrambling.

Inflection 3: Sovereignty forced deliberate positioning

The third inflection revealed that AI governance has fragmented into incompatible ecosystems, forcing organisations to make deliberate strategic choices rather than attempting to serve all markets simultaneously.

The sovereignty trilemma I introduced crystallised across 2025: organisations can optimise for trust, speed, or control — but not all three simultaneously. This fragmentation compounds the resource constraints and regulatory adaptation described above, creating an environment where clarity of positioning becomes essential.

The US ecosystem optimised for speed through scale. The Trump administration’s AI policy explicitly prioritised deregulation, compute expansion, and innovation velocity over precautionary governance. Market rewards for energy-backed infrastructure strategies validated this positioning — Oracle’s reported $300 billion AI infrastructure agreement with OpenAI sent the company’s stock surging over 40% (it has now lost most of those gains).

The EU ecosystem optimised for trust through transparency — then recalibrated. When the GPAI Code of Practice required Model Documentation Forms with 10-year retention periods and systemic risk assessments for models exceeding 10^25 FLOPs of training compute, it established trust as the primary currency of AI value. Then November’s Digital Omnibus acknowledged the competitive drag this created (no doubt influenced by the 46 EU CEOs calling for a pause in the Act). The EU’s €1.1 billion “Apply AI” strategy announced in October 2025 explicitly aimed to reduce reliance on US and China capabilities — a concession that trust-first positioning had met speed-first competition.
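For orientation, the 10^25 FLOPs threshold refers to cumulative training compute. The sketch below estimates a model’s position against it using the widely used 6 × parameters × tokens approximation; both the heuristic and the example figures are assumptions for illustration, not part of the Act’s text.

```python
THRESHOLD_FLOPS = 1e25  # GPAI systemic-risk threshold (training compute)

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate training compute via the common 6*N*D heuristic
    (an estimate, not the Act's own methodology)."""
    return 6 * parameters * tokens

# Hypothetical model: 100bn parameters trained on 15tn tokens.
flops = training_flops(100e9, 15e12)
print(f"{flops:.1e} FLOPs, systemic-risk tier: {flops >= THRESHOLD_FLOPS}")
# 9.0e+24 FLOPs, systemic-risk tier: False (just under the threshold)
```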

China’s ecosystem optimised for control through integration. Open-source model exports served as geopolitical leverage, with DeepSeek and others creating strategic positioning that extends beyond commercial competition. The addition of 400GW of power capacity dwarfs Western additions, whilst tightening data localisation and asserting digital sovereignty creates controlled environments where AI development follows coordinated national priorities.

The “not choosing is choosing” reality became acute in 2025. Organisations attempting to serve all three ecosystems discovered they serve none well. A pharmaceutical company might prioritise trust for European drug approvals whilst needing speed for American market competition. A manufacturer could find their AI-powered quality control systems subject to different sovereignty requirements in each market. Every architectural decision — from model selection to data storage to compute location — now carries sovereignty implications that compound over time.
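One way to see how these choices compound is to write them down as deployment policy. The sketch below is purely illustrative; every market key, model label, and retention period is a hypothetical placeholder.

```python
# Hypothetical per-market deployment policy: model choice, data residency,
# and compute location diverge as soon as sovereignty positions differ.
DEPLOYMENT_POLICY = {
    "eu":   {"model": "eu-hosted-gpai",     "data_residency": "eu-central",
             "compute": "frankfurt",        "doc_retention_years": 10},
    "us":   {"model": "frontier-api",       "data_residency": "us-east",
             "compute": "virginia",         "doc_retention_years": 3},
    "apac": {"model": "open-weights-local", "data_residency": "in-country",
             "compute": "singapore",        "doc_retention_years": 5},
}

def policy_for(market: str) -> dict:
    """Resolve the architecture constraints a given market imposes."""
    return DEPLOYMENT_POLICY[market]

print(policy_for("eu")["doc_retention_years"])  # 10
```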

For UK organisations, the sovereignty fragmentation creates particular complexity. Without alignment to any single bloc, UK businesses must navigate relationships with all three whilst the domestic regulatory approach remains sector-led rather than unified. The strategic stances I outlined — principled standardisation, adaptive localisation, and sovereign specialisation — represent distinct choices rather than points on a spectrum. Organisations that made deliberate choices in 2025 can resource them appropriately and communicate them clearly to stakeholders. Those that allowed sovereignty positions to emerge through operational drift face 2026 with unclear positioning, but the frameworks for making these choices are now well-established.

Inflection 4: The experimentation window closed

The fourth inflection split organisations into two camps: those still experimenting with AI and those operating it at scale — with the gap between them widening every day.

The evidence mounted throughout 2025. MIT research provided the stark headline: 95% of generative AI pilots fail to reach production or deliver measurable ROI. Published in August and reported by Fortune, the finding sparked debate about methodology and measurement. Yet the research aligns with broader patterns across the industry — from RAND’s 70-85% failure estimates to Gartner’s prediction that 30% of GenAI projects will be abandoned by end-2025. Ethan Mollick and others argue that pilots are meant to fail fast and generate learning, which is precisely the point: even when failure is intentional, it requires systematic strategy to capture and scale that learning. The debate reinforced rather than undermined the core message — isolated, project-based AI adoption consistently underperforms strategic expectations. Gartner’s 2025 research added another dimension: 57% of organisations estimate their data isn’t AI-ready.

The shadow AI paradox sharpened the picture. While formal pilots failed, informal adoption surged. Menlo Security’s 2025 report found 90% of employees using AI daily outside enterprise controls. BCG’s AI at Work 2025 research confirmed 54% of employees willing to use unauthorised tools when corporate solutions fall short.

This is the GenAI Divide made concrete: innovation thriving in shadows whilst governed programmes stall. Shadow AI often represents successful informal pilots that formal programmes fail to replicate — revealing governance gaps rather than technology limitations. The opportunity here is significant: organisations that can harness shadow AI through amnesty programmes and minimum lovable governance can accelerate their path across the divide, turning ungoverned experimentation into validated capability.

The competitive consequences are now measurable. PwC’s Global AI Jobs Barometer found industries more exposed to AI showing 3x higher growth in revenue per employee. The gap between AI adopters and AI explorers widened into competitive differentiation that 2026 will only amplify.

What crossing the divide requires became clearer in 2025. Integration with core business processes, not isolated pilots. Workflow redesign, not tool deployment — BCG found that companies redesigning workflows see 67% of employees saving over an hour daily versus 49% for tool-only rollouts. Governance that enables rather than constrains. AI Centres of Excellence (AI CoEs) with expanded mandates that connect shadow innovation to formal capability building.

The experimentation window closed not by executive decree but by competitive pressure. The pathway across the divide is now well-documented, and organisations ready to commit have clear models for success.

Inflection 5: Agentic AI hype collided with operational reality

The fifth inflection saw the year’s headline technology meet the governance reality that determines whether AI creates value or chaos. This inflection builds directly on the previous four: agentic systems require the regulatory clarity, energy access, sovereignty positioning, and production-ready infrastructure that the other inflections addressed.

Agentic AI dethroned generative AI as the technology everyone wanted to discuss. Gartner’s 2025 Hype Cycle positioned AI agents at the peak of inflated expectations. Investment flowed — the EU’s €1.1 billion “Apply AI” strategy included healthcare, manufacturing, and climate applications featuring agentic capabilities. Every vendor rebranded around “agentic” positioning.

The reality check arrived with data. McKinsey’s State of AI 2025 found that 88% of organisations use AI in at least one function, but only 23% have successfully scaled agentic systems enterprise-wide. Some 62% are experimenting with agents — a significant gap between experimentation and scaling. Gartner predicted that 40% of agentic AI projects could be cancelled by 2027 due to high costs and complexity.

Strip away the hype and the core concept is straightforward: agentic AI is generative AI in a loop, where machines drive iteration instead of humans. The strategic question isn’t technological sophistication but where to consciously transfer decision-making authority from people to systems. This framing transforms an overwhelming technology conversation into a manageable governance decision.
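A minimal sketch makes that framing tangible: a model proposes actions, a governance gate decides whether to act or escalate, and each result feeds the next iteration. `call_llm`, `execute`, and the authorisation rule are stand-ins for a real model API and tool layer, not any particular product.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "tool" or "finish"
    payload: str

def call_llm(history: list[str]) -> Action:
    """Stand-in for a real model call: proposes the next action from history."""
    return Action("finish", "done") if len(history) > 3 else Action("tool", "lookup")

def is_authorised(action: Action) -> bool:
    """The governance gate: the explicit authority-transfer decision."""
    return action.kind == "tool" and action.payload in {"lookup"}

def execute(action: Action) -> str:
    """Stand-in for a tool or API call."""
    return f"result of {action.payload}"

def agent_loop(goal: str, max_steps: int = 10) -> str:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm(history)        # the machine drives the iteration
        if action.kind == "finish":
            return action.payload
        if not is_authorised(action):     # where transferred authority ends
            return f"escalated to human: {action.payload}"
        history.append(f"{action.payload} -> {execute(action)}")
    return "escalated to human: step budget exhausted"

print(agent_loop("answer a customer query"))
```

The loop itself is trivial; the consequential line is the authorisation gate, which is where the Board-level question of agency lives.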

Compound loops that integrate multiple AI disciplines — machine learning, computer vision, natural language processing, robotic process automation — offer paths beyond single-discipline limitations. But compound capability requires compound governance. Organisations that failed to cross the GenAI Divide have no foundation for agentic deployment. Agentic systems inherit and amplify governance gaps from underlying AI.

The question “where do we give AI agency?” requires Board-level answers, not IT decisions. 2025 clarified that agentic AI requires the same governance discipline as any capability — plus explicit decisions about authority transfer. Minimum lovable governance becomes essential: just enough structure to demonstrate good faith oversight whilst preserving the agility and speed that make agentic AI valuable. Organisations that built this foundation in 2025 are positioned to capture agentic value; those that didn’t now have a clear roadmap for getting there.

The collective message for Boards

These five inflections share a common thread: AI stopped being a capability question and became an infrastructure question. The shift is from “should we use AI?” to “how do we resource and govern it as core business architecture?”

This demands different disciplines: capital allocation rather than project approvals, long-term commitment rather than pilot funding, governance as enablement rather than compliance checkbox, sovereignty as deliberate strategy rather than operational drift.

The organisations that recognised this transition in 2025 built AI CoEs with Board-level mandates, made deliberate sovereignty choices, invested in data readiness before model sophistication, and treated energy strategy and AI strategy as inseparable. They discovered that systematic governance creates compound advantage — each capability building on the last, each constraint transformed into strategic clarity.

Those that didn’t now face 2026 needing to build foundations whilst others already operate at scale. But the pathway is clear: the frameworks exist, the patterns are documented, and early movers have demonstrated what works. Catching up requires commitment, not invention.

The inflections of 2025 changed the strategic calculus permanently. The question for every Board is whether they’re ready to treat AI as infrastructure rather than novelty — and the opportunity to do so remains wide open.

Let's Continue the Conversation

Thank you for reading this synthesis of 2025's five defining AI inflections. I'd welcome hearing about your Board's experience navigating these shifts — whether you're recalibrating compliance strategies following the EU's regulatory adaptation, wrestling with energy and sovereignty considerations in your AI infrastructure decisions, working to cross the GenAI Divide from experimentation to production, or building governance frameworks for agentic AI deployment.