From AI Pilots and Projects to AI Strategy: Avoiding the Business Case Trap

Sydney | Published in AI and Board | 10 minute read
[Image: Multiple small groups of musicians scattered across a grand concert hall, each playing different pieces of music simultaneously, creating fragmentation despite individual excellence. Generated by ChatGPT 5]

McKinsey’s 2025 research shows that 92% of companies plan to increase AI investments over the next three years, yet only 1% have reached maturity, where AI drives substantial outcomes. Their State of AI analysis reveals that just 21% of organisations have redesigned workflows to integrate AI properly. This stark disconnect tells us something important: organisations are approving AI initiatives at record pace but failing to connect them into coherent capability. They’re collecting AI projects like stamps – each valuable in isolation, none adding up to competitive advantage. This chasm between investment ambition and integration reality reveals a fundamental misconception plaguing boardrooms: the belief that accumulating AI business cases constitutes AI strategy.

Every quarter, Boards approve another round of AI initiatives. Each comes with its own compelling business case – cost savings here, efficiency gains there, competitive catch-up everywhere. Yet despite this flurry of approvals, organisations find themselves with fragmented capabilities, governance gaps, and a gnawing sense that their AI efforts aren’t adding up to competitive advantage. The problem isn’t the individual business cases. It’s mistaking them for strategy.

The seductive logic of the business case

Business cases serve a vital purpose in corporate governance. They provide structured justification for investment, quantify expected returns, and create accountability for outcomes. Earlier this year, I explored how to build compelling AI business cases that move beyond simple ROI calculations to capture broader value creation. That approach remains essential – Boards need rigorous evaluation methods for AI investments. For decades, this project-by-project approval mechanism has served Boards well in governing technology investments. Apply the same rigour to AI, the thinking goes, and success should follow.

This logic appears sound until we examine what business cases actually measure. A business case is fundamentally a tool of isolation – it evaluates a single initiative against a narrow set of criteria, typically focusing on direct financial returns within a specific timeframe. It asks whether this particular investment will generate sufficient value to justify its cost. It doesn’t ask whether this investment reinforces or undermines other initiatives, whether it builds systematic capability, or whether it addresses the governance challenges that will determine long-term success. Even the most sophisticated business case, capturing both financial and non-financial value, cannot address these systemic questions.

Consider how a typical AI business case unfolds in the boardroom. The marketing team presents a compelling case for an AI-powered customer segmentation tool, projecting 15% improvement in campaign effectiveness. The operations team separately justifies an AI system for supply chain optimisation, promising 20% reduction in inventory costs. The HR department champions an AI recruitment platform to reduce time-to-hire by 30%. Each case, viewed in isolation, appears compelling. Each gets approved.

Six months later, the organisation discovers these “successful” pilots have created three different data governance models, incompatible technical standards, and conflicting approaches to AI ethics. The marketing team’s segmentation tool uses customer data in ways that violate the privacy framework the legal team is developing. The supply chain system’s efficiency gains come at the expense of transparency that the board requires for ESG reporting. The recruitment platform’s bias detection methods contradict the fairness principles adopted by other departments.

When success becomes failure

The fragmentation runs deeper than technical incompatibility. S&P Global’s 2025 analysis found that 42% of businesses scrapped most of their AI initiatives, up from just 17% the previous year. These weren’t technical failures – the pilots often delivered their promised functionality. They were strategic failures, casualties of an approach that optimises parts whilst ignoring the whole.

AIMultiple’s research into fragmented AI adoption reveals how separate pilots create cascading governance failures. When organisations pursue disconnected initiatives across departments – diagnostics here, operations there, customer service elsewhere – they inadvertently create multiple, incompatible governance frameworks. Each pilot might succeed on its own terms whilst undermining enterprise-wide coherence. Data fragmentation from isolated projects yields biased outputs and incomplete analysis, eroding overall value rather than building it.

The pattern repeats across industries. Organisations with separate AI initiatives for different functions find themselves unable to answer basic governance questions: What data are we using? What are our ethical boundaries? How do we measure success? They haven’t built AI governance – they’ve accumulated project-level decisions that actively work against each other.

The failure rate debate

The much-discussed “95% failure rate” for AI initiatives, reported by MIT, has sparked important debate about how we measure AI success. Whilst some question the methodology – structured interviews with representatives from 52 organisations plus survey responses from 153 senior leaders – the findings align with broader patterns seen across the industry, from RAND’s 70–85% failure estimates to Gartner’s prediction that 30% of GenAI projects will be abandoned by end-2025. Ethan Mollick and others argue that pilots are meant to fail fast and generate learning, which is precisely the point: even when failure is intentional, capturing and scaling that learning still requires systematic strategy.

What this debate illuminates matters more than any specific percentage. Whether the failure rate is 95%, 70%, or 30%, the pattern remains consistent: isolated, project-based AI adoption underperforms strategic expectations, and the 70–95% range cited across multiple studies suggests most pilots never reach production. What matters is understanding why these failures occur. It’s not about technical capability or even organisational readiness. It’s about the fundamental mismatch between how AI creates value and how business cases measure it.

AI’s value compounds through network effects, data synergies, and capability building. A customer service chatbot becomes more valuable when it shares learning with the sales system. Predictive maintenance algorithms improve when they can access quality data from across the enterprise. AI governance frameworks become more robust when they’re applied consistently rather than reinvented for each use case. None of these compound benefits appear in individual business cases.

The AI-washing trap

The pressure to “do something” with AI has created a more insidious problem: AI-washing. Boards, feeling the weight of investor expectations and competitive anxiety, approve AI initiatives not because of compelling business cases but because not having AI seems like strategic negligence. McKinsey’s latest research shows that whilst 78% of organisations use AI in at least one function, fewer than one-third follow basic scaling practices such as roadmaps or consistent KPIs.

This reactive approval pattern leads organisations down a familiar path. They deploy AI quickly to meet market expectations but struggle to scale beyond pilots. They announce AI initiatives to stakeholders whilst wrestling with governance gaps behind the scenes. They pursue transformation through accumulation rather than integration, hoping that enough individual successes will somehow cohere into strategic advantage.

The disconnect is understandable but costly. Boards apply the governance frameworks that have served them well for other technology investments – rigorous business case evaluation, staged investment gates, project-level accountability. These tools remain valuable, but AI requires additional consideration. Unlike traditional IT deployments that can succeed in isolation, AI creates value through network effects, shared learning, and compound capabilities. A merger wouldn’t proceed without integration planning; a new market entry would demand a coordinated go-to-market plan. AI transformation requires similar systematic thinking, recognising it as a fundamental shift in how organisations create and capture value, not simply another technology upgrade.

The velocity mismatch

Traditional board governance operates on quarterly reviews and annual strategic planning cycles. Business cases fit neatly into this rhythm – propose, evaluate, approve, monitor. AI development moves at an entirely different velocity. The capability that justified a business case in January may be obsolete by June. The vendor that seemed strategic in Q1 might be disrupted by an open-source alternative in Q3.

This pace difference creates a real problem. Boards need to move faster to keep pace with AI evolution, but moving faster with project-by-project approvals only accelerates fragmentation. Each emergency approval, each reactive investment to “catch up” with competitors, adds another incompatible piece to an increasingly discordant ensemble. Organisations find themselves running harder whilst falling further behind, not because they lack AI initiatives but because those initiatives don’t harmonise into sustainable capability.

McKinsey’s research starkly illustrates this challenge: only 19% of organisations currently see more than 5% of revenue from AI, yet 87% expect to reach this threshold within three years. This optimism gap reflects a fundamental misunderstanding. These organisations believe that doing more of the same – approving more business cases, launching more pilots – will somehow yield different results. They’re mistaking velocity of activity for strategic progress.

Shadow AI and the governance paradox

Perhaps the most damning indictment of the business case approach comes from an unexpected source: shadow AI. As I’ve explored in my articles on AI amnesty programmes and their implementation, employees across organisations are quietly adopting AI tools without formal approval, often achieving better results than official pilots. Marketing teams use ChatGPT for content creation. Analysts employ Claude for research synthesis. Developers integrate GitHub Copilot without waiting for IT approval.

This shadow AI often delivers immediate value precisely because it bypasses the bureaucracy that strangles official initiatives. It spreads organically through teams that see practical benefits. It adapts quickly to changing needs without committee approval. It succeeds where formal business cases fail, raising an uncomfortable question: if ungoverned AI creates more value than governed initiatives, what exactly are Boards governing for?

The answer points to a fundamental gap in project-level thinking. Shadow AI may deliver tactical wins, but it also creates systemic risks – from data leakage to compliance violations to inconsistent customer experiences. My previous analysis of shadow AI governance highlighted these tensions. The solution isn’t to crack down on shadow AI or to abandon governance. It’s to recognise that current governance frameworks, built around individual business cases, are fundamentally misaligned with how AI creates value and risk. What shadow AI’s success really tells us is that our governance structures need strategic overhaul, not tactical adjustment.

The strategic imperative

True AI strategy requires a different lens entirely. Rather than asking “does this project make sense?”, Boards must ask “how does this initiative contribute to systematic AI capability?” Instead of evaluating returns in isolation, they must consider compound effects and strategic dependencies. Rather than approving projects, they must govern transformation.

This doesn’t mean abandoning financial rigour or accountability. It means recognising that AI value creation follows different patterns than traditional technology investments. Network effects mean early initiatives may show poor returns whilst laying essential foundations. Capability building requires accepting lower efficiency in the short term to achieve transformation in the long term. Governance frameworks need upfront investment that won’t show returns until multiple use cases are deployed.

Strategy, as Richard Rumelt reminds us in Good Strategy Bad Strategy, consists of three elements: diagnosis of the challenge, guiding policy to address it, and coherent actions that work together. Business cases, by their nature, skip diagnosis and policy to jump straight to isolated actions. They assume the challenge is simply “we need AI” and the policy is “approve good projects”. This assumption is fundamentally wrong.

The real challenge facing Boards isn’t whether to adopt AI – that question has been answered by the market. The challenge is how to govern AI adoption in ways that build systematic capability whilst managing compound risks. This requires diagnostic frameworks that reveal the true governance challenges, guiding policies that align initiatives with strategic intent, and coherent actions that reinforce rather than undermine each other.

Beyond the business case

Organisations that successfully navigate AI transformation share a common characteristic: they’ve moved beyond project-level thinking to embrace systematic strategy. They still evaluate individual initiatives – using frameworks like those I outlined in my business case series to capture full value potential – but within a strategic framework that considers alignment, capability building, and governance coherence. They recognise that AI transformation isn’t a series of independent projects but an interconnected system where success depends on how the pieces work together.

This shift requires Boards to fundamentally reimagine their role in AI governance. Rather than serving as approval gates for individual business cases, they must become orchestrators of systematic transformation – ensuring each initiative plays its part in a larger composition. Rather than asking “what’s the ROI?” for each initiative, they must ask “how does this build our AI capability?” Rather than managing a portfolio of projects, they must govern an ecosystem of interconnected capabilities.

The evidence is clear. Organisations pursuing AI through disconnected business cases achieve, at best, incremental improvements that don’t compound into competitive advantage. At worst, they create fragmented capabilities, governance chaos, and strategic confusion that actually destroys value. The path forward doesn’t lie in better business cases or more rigorous project evaluation. It lies in recognising that business cases, no matter how well crafted, are not strategy.

The question facing Boards isn’t whether they need AI strategy – they do. The question is whether they’ll continue mistaking the tactical justification of individual projects for the strategic orchestration of systematic transformation. Those that make this distinction will build sustainable AI capability. Those that don’t will find themselves with an expensive collection of pilots that never quite add up to transformation, wondering why their careful project-by-project approach yielded such disappointing results.

But recognising that business cases aren’t strategy is only the first step. True strategy, as Rumelt teaches us, begins with diagnosis – understanding the real challenge we face. The next article in this series examines what that diagnosis reveals: six fundamental concerns that Boards must address to govern AI effectively. Whilst I’ve previously explored Board priorities for AI governance, the diagnostic lens reveals these concerns not as a checklist but as an interconnected system that explains why project-level thinking fails. Because once we move beyond the business case trap, we need to properly diagnose what we’re actually governing for.

Let's Continue the Conversation

Thank you for reading about the business case trap. I’d welcome hearing about your organisation’s experience with AI fragmentation – whether you’re seeing disconnected pilots, governance gaps between initiatives, or success stories in moving from projects to systematic strategy.

About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology’s largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.