Rethinking Business Cases in the Age of AI: Creating the Foundation

When I developed AWS’s first cloud business case tool back in 2015, the challenge was extending traditional IT investment models to capture cloud’s broader business impact. Today’s AI business cases face a much steeper challenge. As we saw in my previous article, AI’s parallel, multi-speed adoption patterns don’t fit neatly into conventional ROI calculations. The stakes are high - McKinsey recently found that despite significant investment, more than 80% of organisations haven’t seen tangible impact on enterprise-level EBIT from generative AI.
So how do we bridge this gap? Through my early work on cloud business cases, and my most recent work with boards and executives on AI adoption, I’ve developed an approach built on five foundational building blocks. Unlike rigid frameworks, these building blocks can be applied flexibly to different AI initiatives while ensuring comprehensive evaluation.
Building Block 1: Strategic Purpose
The first question any AI business case must answer isn’t about technology or even ROI - it’s about strategic purpose. “How does this initiative align with our broader objectives?” This is where my Well-Advised Framework helps boards evaluate AI initiatives across its five key pillars.
When mapping AI initiatives to the Well-Advised pillars, look beyond the obvious primary benefits. For example, in retail, computer vision initiatives that initially seem like operational improvements often enable personalised experiences that dramatically improve customer retention while creating foundations for new service offerings.
Similarly, a manufacturer’s predictive maintenance AI might start as an operational efficiency play, but its most significant value could emerge in adjacent areas - using the same data to improve product design (Innovation pillar) or enable new service-based revenue streams (Revenue pillar). The richest AI opportunities often deliver across multiple pillars.
This strategic mapping also helps prioritise competing initiatives. When faced with limited resources, give preference to AI investments that deliver value across multiple pillars rather than those with deeper but narrower impact. These multi-dimensional initiatives typically create broader capability foundations that support future AI applications.
I’ve found that a structured business case template that explicitly maps initiatives to the Well-Advised pillars helps boards visualise this strategic alignment. Such templates guide executives to look beyond immediate cost savings to identify broader strategic contributions, ensuring AI investments directly advance organisational priorities rather than creating isolated technology islands.
Below is an example of a strategic alignment assessment I typically include in AI business cases. This approach clearly shows how a predictive maintenance solution delivers value across all five Well-Advised pillars:
| Well-Advised Pillar | Strategic Contribution | Value Measurement | Priority |
|---|---|---|---|
| Innovation & New Services | Creates new data-driven service offerings. Enables predictive vs. reactive maintenance models. Transforms equipment sales into ongoing relationships. | New service revenue opportunity: £XXk. New market segments opened: 2. Competitive differentiation score: High. | 4/5 |
| Customer Value & Growth | Improves delivery reliability by 20%. Enhances responsiveness to changing demands. Strengthens customer trust through proactive support. | Customer satisfaction uplift: 15%. On-time delivery improvement: 23%. New customer acquisition potential: 8%. | 4/5 |
| Operational Excellence | Reduces unplanned downtime by 60%. Extends equipment lifespan by 35%. Optimises maintenance scheduling. | Annual downtime cost savings: £345k. Productivity improvement: 18%. Labour efficiency gain: 22%. | 5/5 |
| Responsible Transformation | Reduces waste through extended equipment life. Decreases energy consumption via optimised operations. Creates sustainable knowledge transfer practices. | Carbon footprint reduction: 11%. Waste reduction: 15 metric tons. Compliance risk mitigation score: Medium. | 3/5 |
| Revenue, Margin & Profit | Increases production capacity by 20%. Reduces maintenance costs by 30%. Drives upsell/cross-sell opportunities. | Annual financial impact: £780k. Margin improvement: 4.3%. ROI timeline: 18 months. | 4/5 |
Overall Strategic Alignment Score: 20/25
This initiative delivers exceptional value across multiple Well-Advised pillars, with particularly strong contributions to Operational Excellence (5/5) and balanced impact across Innovation, Customer Value, and Financial Performance (4/5 each). While Responsible Transformation shows moderate alignment (3/5), the initiative’s comprehensive strategic impact makes it a high-priority candidate for board approval.
By visualising strategic alignment in this structured format, boards can quickly assess how AI initiatives support organisational priorities across all dimensions, not just financial performance. This approach naturally guides discussion toward strategic impact rather than getting lost in technical implementation details.
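For teams building this assessment into a repeatable tool, the scoring logic is simple to encode. The sketch below is a hypothetical illustration, not a prescribed implementation: it takes per-pillar scores (1-5) like those in the table above, derives the overall alignment score, and flags multi-pillar initiatives - reflecting the preference for breadth over depth when prioritising. The pillar names follow the Well-Advised Framework; the scores mirror the predictive maintenance example.

```python
# Hypothetical sketch: scoring an initiative against the five Well-Advised
# pillars and deriving the overall alignment score shown in the table above.

PILLARS = [
    "Innovation & New Services",
    "Customer Value & Growth",
    "Operational Excellence",
    "Responsible Transformation",
    "Revenue, Margin & Profit",
]

def alignment_summary(scores: dict[str, int]) -> dict:
    """Summarise per-pillar scores (1-5) into an overall alignment view."""
    total = sum(scores[p] for p in PILLARS)
    max_total = 5 * len(PILLARS)
    # Breadth matters: count pillars with a strong contribution (>= 4/5),
    # reflecting the preference for multi-pillar initiatives over narrow ones.
    strong_pillars = [p for p in PILLARS if scores[p] >= 4]
    return {
        "overall": f"{total}/{max_total}",
        "strong_pillars": strong_pillars,
        "multi_pillar": len(strong_pillars) >= 3,
    }

# Scores from the predictive maintenance example above
predictive_maintenance = {
    "Innovation & New Services": 4,
    "Customer Value & Growth": 4,
    "Operational Excellence": 5,
    "Responsible Transformation": 3,
    "Revenue, Margin & Profit": 4,
}

print(alignment_summary(predictive_maintenance))
# Overall 20/25, with four strong pillars, so the initiative is multi-pillar
```

Even a simple encoding like this makes assessments comparable across initiatives, which matters once a board is prioritising a portfolio rather than a single business case.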
Building Block 2: Value Spectrum
Traditional business cases rely heavily on lagging indicators - metrics that confirm value after it’s been created. For AI initiatives, this perspective is dangerously limited. Drawing from my work on Decision Analytics, I’ve found that effective AI business cases need to consider a full spectrum of value indicators.
Consider a healthcare provider implementing AI for patient triage. Lagging indicators might include reduced waiting times and treatment costs. These metrics are important but tell only part of the story. Leading indicators would track engagement levels with the system, clinician trust scores, and early intervention rates - metrics that signal future value creation before it fully materialises. Most importantly, predictive indicators would model potential outcomes under different scenarios, such as projected reduction in adverse events or improvements in population health metrics.
This three-dimensional value lens helps boards see beyond immediate returns to understand AI’s strategic impact. It addresses one of the key limitations I discussed in my previous article - the tendency to overvalue easily measured benefits while undervaluing strategic advantages.
In the financial services sector, I’ve observed organisations evaluating AI risk assessment tools where immediate cost savings appeared modest. By considering leading and predictive indicators in their value measurement frameworks, these firms often discover the true value - the potential to prevent significant future losses while improving regulatory compliance.
I’ve found that a structured value measurement framework helps boards track these multidimensional benefits. Below is an example of how a value spectrum approach for an AI risk assessment tool in financial services captures value across different time horizons:
| Indicator Type | Metrics | Value Dimension | Priority |
|---|---|---|---|
| Lagging Indicators | Operational cost savings: £120k/year. Manual processing reduction: 67%. Compliance violation reduction: 25%. | Direct financial impact. Operational efficiency. Regulatory compliance. | 3/5 |
| Leading Indicators | Risk pattern identification rate: 85%. Decision-making accuracy improvement: 82%. Early warning signal generation: 70%. | Process improvement. Decision quality. Proactive management. | 4/5 |
| Predictive Indicators | Dynamic market shock impact modelling: 95% accuracy in stress scenarios. Multi-variable fraud pattern evolution prediction: 8-week advance warning. AI-simulated regulatory scenario projections with 87% confidence rating. | Anticipatory risk governance. Threat vector evolution modelling. Regulatory landscape navigation. | 5/5 |
Presented in this structured format, the value spectrum helps boards see beyond immediate cost savings to understand the true strategic impact of AI. While lagging indicators might show modest immediate returns (3/5), predictive indicators reveal the solution’s highest value (5/5) in capabilities that only AI can deliver - modelling complex market interactions, anticipating how fraud techniques will evolve, and simulating regulatory changes before they occur.
By presenting value across these three dimensions, boards can make more informed decisions about AI investments that might otherwise be difficult to justify using conventional financial metrics alone.
For boards, the key is creating balanced scorecards that combine financial metrics with operational, customer, strategic, and risk indicators. This doesn’t mean abandoning financial discipline - rather, it means enriching it with a more complete picture of how AI creates value over time. In practice, I’ve found that a structured value measurement framework helps boards track and evaluate these multi-dimensional benefits systematically, making what was previously intangible much more concrete.
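A balanced scorecard of this kind can be sketched as a weighted blend of the three indicator horizons. The example below is illustrative only: the metric values are loosely drawn from the risk assessment table above, and the horizon weights (3, 4, 5) are hypothetical stand-ins for the priorities a board might assign, not fixed parameters of any framework.

```python
# Illustrative value-spectrum scorecard: average normalised indicator scores
# within each horizon, then blend horizons by their assigned priority weight.
# All figures and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    score: float   # normalised 0-1 achievement against target
    horizon: str   # "lagging", "leading", or "predictive"

# Relative priority of each horizon (mirroring the 3/5, 4/5, 5/5 ratings above)
HORIZON_WEIGHTS = {"lagging": 3, "leading": 4, "predictive": 5}

def scorecard(indicators: list[Indicator]) -> dict[str, float]:
    """Average indicator scores per horizon, then blend by horizon weight."""
    by_horizon: dict[str, list[float]] = {h: [] for h in HORIZON_WEIGHTS}
    for ind in indicators:
        by_horizon[ind.horizon].append(ind.score)
    horizon_scores = {
        h: sum(vals) / len(vals) if vals else 0.0
        for h, vals in by_horizon.items()
    }
    total_weight = sum(HORIZON_WEIGHTS.values())
    blended = sum(
        HORIZON_WEIGHTS[h] * s for h, s in horizon_scores.items()
    ) / total_weight
    return {**horizon_scores, "blended": round(blended, 3)}

indicators = [
    Indicator("Manual processing reduction", 0.67, "lagging"),
    Indicator("Risk pattern identification", 0.85, "leading"),
    Indicator("Stress-scenario modelling accuracy", 0.95, "predictive"),
]
print(scorecard(indicators))
```

The design choice worth noting is that the blend deliberately over-weights predictive indicators, so an initiative whose value is mostly anticipatory still scores well even when its lagging financial metrics look modest.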
Building Block 3: True Investment Profile
AI costs follow fundamentally different patterns than traditional technology investments. When I advise boards on AI business cases, I often see the same critical blind spots in their cost estimates. The most obvious issue is underestimating data costs. While technology prices are increasingly transparent, data preparation and governance costs remain stubbornly opaque.
I’ve regularly observed that initial AI budgets typically allocate only 15-20% for data preparation. In reality, organisations consistently end up spending 50-65% of their total project costs on cleaning, structuring, and governing data. This significant discrepancy appears regardless of industry or organisation size. However, as I’ve emphasised in previous talks, perfect data is a path to paralysis. The key is identifying when your data is ‘good enough’ for the specific use case while budgeting realistically for necessary preparation work. Many boards mistakenly believe they need pristine data before starting AI initiatives, when practical experiments with existing data often deliver immediate value while revealing specific quality requirements.
Beyond data, AI projects include several cost categories that rarely appear in traditional technology business cases. These include model monitoring and maintenance, retraining cycles, governance overhead, and specialised talent acquisition.
The timing of AI investments also follows unique patterns aligned with the AI Stages of Adoption. Initial experimentation is relatively inexpensive, but successful pilots trigger larger investment needs for the adoption and optimisation stages. Boards need visibility into this full investment journey, not just the initial pilot costs.
I find it helpful to map investment profiles against expected adoption stages, showing how costs will evolve as initiatives mature. This staged view helps boards make more informed decisions about initial commitments while understanding future funding requirements. It also addresses the false comparison between pilot costs and full implementation value that often distorts AI ROI calculations.
I’ve developed an investment profile calculator that captures these AI-specific cost patterns, helping executives understand the complete financial picture beyond initial implementation. This approach allows for more realistic budgeting and helps prevent the mid-project funding crises that can derail promising AI initiatives.
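The core of such a calculator can be sketched in a few lines. The stage names, cost categories, and figures below are hypothetical examples, not outputs of my actual tool: the sketch illustrates the two patterns discussed above - costs ramping up sharply after an inexpensive pilot, and data preparation consuming a far larger share of total spend (50-65%) than initial budgets typically allocate.

```python
# Hypothetical investment profile sketch across AI adoption stages.
# Stage names, categories, and figures are illustrative only.

STAGES = ["experiment", "pilot", "adoption", "optimisation"]

def investment_profile(stage_costs: dict[str, dict[str, float]]) -> dict:
    """stage_costs maps stage -> {cost category: cost in £k}."""
    total = sum(sum(cats.values()) for cats in stage_costs.values())
    data_prep = sum(cats.get("data_preparation", 0.0)
                    for cats in stage_costs.values())
    return {
        "total_k": total,
        # Share of total spend going to data work - the common blind spot
        "data_prep_share": round(data_prep / total, 2),
        # Running total as the initiative matures, showing the post-pilot ramp
        "cumulative_by_stage": {
            s: sum(sum(stage_costs[t].values())
                   for t in STAGES[:STAGES.index(s) + 1])
            for s in STAGES
        },
    }

# Illustrative figures only (£k), including AI-specific categories such as
# governance overhead and ongoing monitoring/retraining
costs = {
    "experiment":   {"data_preparation": 20,  "model_build": 15},
    "pilot":        {"data_preparation": 60,  "model_build": 40, "governance": 10},
    "adoption":     {"data_preparation": 180, "integration": 90, "talent": 45},
    "optimisation": {"data_preparation": 60,  "monitoring_retraining": 40},
}

profile = investment_profile(costs)
print(profile)
# Data preparation lands at 57% of the £560k total - within the 50-65% range -
# and cumulative spend ramps from £35k at experiment to £460k by adoption
```

Even a rough model like this surfaces the post-pilot funding cliff early, which is precisely the visibility boards need before approving an initial commitment.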
Building Block 4: Readiness Reality Check
Even the strongest business case collapses without implementation readiness. Through developing the Five Pillars of AI capability, I’ve identified key readiness factors that determine AI success.
Technical readiness extends beyond basic infrastructure to include data access pathways, integration capabilities, and security frameworks. For example, organisations sometimes discover too late that their existing data architecture can’t support real-time AI decision-making, forcing a costly redesign that delays initiatives by months.
Data readiness often proves the most challenging aspect. Organisations need pragmatic assessment of whether their data is good enough for the specific use case. This includes understanding data completeness, consistency, and accessibility across organisational boundaries.
People and cultural readiness frequently determines success or failure - as it does for cloud adoption. For example, having all the technical components in place for an AI quality control system is table stakes; just as important is preparing the production teams who will work alongside the AI, because resistance to change can otherwise add months of implementation delay.
Process readiness examines whether existing workflows can effectively incorporate AI insights. A common challenge occurs when organisations implement sophisticated AI detection systems without adapting their manual review processes. This mismatch creates operational bottlenecks where AI-generated alerts require human action but expire before they can be addressed, rendering the entire system ineffective despite its technical capabilities.
Governance readiness is particularly crucial for AI, with its unique ethical and risk dimensions. Board-level oversight requires carefully designed frameworks that establish clear accountability, transparent decision processes, and appropriate escalation paths. Without these governance mechanisms, organisations face significant regulatory and reputational risks, especially as AI regulations continue to evolve globally.
For boards, these readiness assessments provide critical context for investment decisions. They help identify prerequisites that must be addressed before full implementation, and often reveal capability gaps that span multiple initiatives - making them prime candidates for separate investment.
Building Block 5: Scaling and Synergy Potential
Where traditional business cases evaluate initiatives in isolation, AI investments create value through scaling and synergies - addressing a key limitation I identified in my previous article.
Scaling pathways take different forms. Horizontal scaling extends capabilities across business units, as when a customer service chatbot expands from one product line to the entire portfolio. Vertical scaling deepens capabilities within functions, like an AI quality monitoring system that expands from detecting defects to predicting and preventing them. Ecosystem scaling extends capabilities to partners and suppliers, as when a manufacturer shares predictive maintenance insights with equipment vendors to improve future designs.
Even more significant are the synergies between AI initiatives. Data prepared for one project becomes a foundation for others. Governance frameworks developed for initial applications scale across the business. AI expertise built in one department transfers to new use cases. These cross-functional synergies are entirely missed by project-based ROI calculations.
The AI Centre of Excellence plays a crucial role in capturing cross-initiative synergies. AI CoEs help organisations reduce total costs, improve data team productivity, and scale AI to handle more use cases across the business. By identifying duplicate efforts and promoting standardised approaches, they enable significant resource optimisation. According to AWS, AI CoEs recognise common reusable patterns from different business units, reducing redundant work and allowing organisations to implement company-wide AI visions more effectively. This centralised approach to AI implementation helps organisations avoid the inefficiencies of siloed development while creating more robust solutions that serve broader enterprise needs.
For boards, this building block provides perspective on how individual AI investments contribute to broader organisational capabilities. It helps identify initiatives that might show modest standalone ROI but create substantial platform value for future applications.
From Building Blocks to Business Cases
These five building blocks work together to create AI business cases that capture the technology’s unique value creation patterns. Rather than forcing AI into traditional templates, they provide a flexible structure that maintains financial discipline while accommodating AI’s distinctive characteristics.
Different initiatives will naturally emphasise different building blocks. Strategic initiatives might lean heavily on purpose alignment and scaling potential, while operational AI might focus more on readiness assessment and immediate value measurement. The key is applying these building blocks thoughtfully rather than mechanistically.
This approach doesn’t make AI business cases simpler - if anything, it acknowledges their inherent complexity. But it does make them more effective, helping boards distinguish between initiatives that create sustainable advantage and those that deliver only incremental improvements.
From my experience working with dozens of organisations on AI transformation, I’ve found that integrating these building blocks into a cohesive business case template helps boards move beyond theoretical discussions to practical evaluation. Such a structured approach ensures all dimensions are considered while making the process repeatable and comparable across initiatives.
The next challenge is finding the right opportunities to evaluate with this approach. In my next article, I’ll explore how to systematically identify high-value AI use cases across your organisation - moving beyond the obvious applications to discover initiatives with transformative potential.
Let's Continue the Conversation
I'm interested in hearing how you're approaching AI business cases in your organisation. Which of these building blocks proves most challenging in your board discussions? Are there approaches or tools you've found effective for evaluating AI investments?
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.