Rethinking Business Cases in the Age of AI: Building Your AI Business Case

In my previous articles, I’ve explored why traditional business cases fall short for AI investments, outlined the essential building blocks for effective evaluation, and provided a methodology for identifying high-value AI opportunities. Now we turn to the practical process of constructing business cases that capture AI’s unique characteristics while providing boards with the clarity they need for confident decision-making.
Having identified promising AI opportunities, the challenge shifts to articulating their value in ways that resonate with decision makers. Through my interactions with Chartered Directors and their Boards, I’ve observed a consistent pattern: the most successful AI business cases don’t simply adapt traditional templates — they fundamentally rethink the approach to capture AI’s distinctive value creation patterns.
The process I’m sharing isn’t theoretical; it’s built from practical experience guiding organisations through AI evaluation and investment decisions. While maintaining financial rigour, it expands beyond conventional models to address AI’s multi-dimensional impact. This approach helps boards understand not just the ‘what’ of AI investment but the ‘why’ and ‘how’ that determine long-term success.
Defining Strategic Intent with the Well-Advised Balanced Scorecard
The foundation of any compelling AI business case lies in clearly articulating strategic intent. While traditional business cases often begin with financial projections, effective AI business cases start with strategic alignment that answers fundamental questions: What business objectives does this initiative advance? How does it align with our broader strategic vision? Which stakeholders benefit and how?
This strategic narrative should explicitly connect the initiative to your organisation’s priorities using the Well-Advised Framework. Developed by me at AWS as an executive-focused counterpart to the technically-oriented Well-Architected Framework, Well-Advised is a universally adaptable balanced scorecard for evaluating AI investments, ensuring strategic alignment across any organisation or industry.
While other evaluation frameworks exist, such as Robert Kaplan and David Norton’s balanced scorecard with its financial, customer, internal process, and learning perspectives, Well-Advised’s inclusion of Responsible Business Transformation as a distinct pillar makes it particularly well-suited to AI business cases, where ethical considerations, governance, and responsible implementation are increasingly critical success factors.
I recommend creating a strategic alignment matrix that maps the initiative against each Well-Advised pillar with specific metrics for each dimension. For example, a manufacturer implementing AI-driven predictive maintenance might display strategic alignment across multiple pillars:
Innovation & New Products/Services:
- Enables transition from equipment sales to uptime-as-a-service business model
- Creates data-driven product improvement feedback loops
- Establishes foundation for broader IoT/edge computing capabilities
Customer Value & Growth:
- Reduces unplanned downtime by 60% for production-critical equipment
- Improves customer satisfaction through increased reliability
- Creates opportunities for premium service tier offerings
Operational Excellence:
- Transforms maintenance from calendar-based to condition-based scheduling
- Extends equipment lifespan by 25-30%
- Optimises spare parts inventory management
Responsible Transformation:
- Reduces waste through optimised component replacement
- Improves sustainability through extended equipment lifecycles
- Creates safer working conditions by predicting potential failures
Revenue, Margin & Profit:
- Reduces maintenance costs by £850K annually
- Creates potential for £2.3M in new service revenue
- Improves production capacity utilisation by 15%
Balancing the Scorecard
For AI portfolio management, the Well-Advised pillars suggest a balanced investment approach:
- 20-25% of initiatives focused primarily on Innovation and New Products/Services
- 20-25% centred on Customer Value and Growth
- 20-25% emphasising Operational Excellence and Efficiency
- 10-15% advancing Responsible Business Transformation
- 20-25% directly driving Revenue, Margin, and Profit
This distribution ensures a balanced portfolio delivering value across all strategic dimensions rather than overweighting initiatives in any single area. The exact distribution should reflect your organisation’s strategic priorities and current position in the AI Stages of Adoption.
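As a worked illustration, the suggested ranges above can be encoded as a simple portfolio balance check. The pillar names and ranges come from this article; the portfolio counts are hypothetical and would be replaced with your own initiative inventory.

```python
# Sketch: check an AI initiative portfolio against the suggested
# Well-Advised distribution ranges. Ranges are from the article;
# the portfolio itself is illustrative.

SUGGESTED_RANGES = {
    "Innovation & New Products/Services": (0.20, 0.25),
    "Customer Value & Growth": (0.20, 0.25),
    "Operational Excellence": (0.20, 0.25),
    "Responsible Transformation": (0.10, 0.15),
    "Revenue, Margin & Profit": (0.20, 0.25),
}

def portfolio_balance(counts: dict[str, int]) -> dict[str, tuple[float, bool]]:
    """Return each pillar's share of initiatives and whether it
    falls within the suggested range."""
    total = sum(counts.values())
    report = {}
    for pillar, (lo, hi) in SUGGESTED_RANGES.items():
        share = counts.get(pillar, 0) / total
        report[pillar] = (share, lo <= share <= hi)
    return report

# Illustrative portfolio of 20 initiatives
portfolio = {
    "Innovation & New Products/Services": 5,
    "Customer Value & Growth": 4,
    "Operational Excellence": 5,
    "Responsible Transformation": 2,
    "Revenue, Margin & Profit": 4,
}

for pillar, (share, ok) in portfolio_balance(portfolio).items():
    print(f"{pillar}: {share:.0%} ({'within' if ok else 'outside'} suggested range)")
```

A check like this is a conversation starter for portfolio reviews, not a rule: as the article notes, the exact distribution should reflect your strategic priorities and stage of adoption.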
Beyond this structured alignment, the qualitative narrative should address the initiative’s impact on key stakeholders. What changes will employees experience? How will customers’ interactions evolve? What new capabilities will the organisation develop? These qualitative dimensions provide crucial context for subsequent quantitative projections.
The strategic narrative should explicitly address the initiative’s position in your AI Stages of Adoption journey. Is this an experimental initiative designed to build initial capabilities? An adoption-stage project scaling proven approaches? Or a transformative initiative reshaping core business processes? This context helps boards understand how the initiative fits within your overall AI maturity progression.
By using Well-Advised as a balanced evaluation framework for AI business cases, you maintain consistency with your established strategic approach while ensuring comprehensive assessment across all dimensions that matter to your organisation.
Building the AI Cost Structure
AI initiatives involve cost categories that differ significantly from traditional technology investments. Effective business cases must capture these unique patterns while providing realistic projections that avoid common pitfalls.
I recommend structuring AI costs across three dimensions, similar to cloud adoption business cases but with some notable additions:
Implementation Costs
Implementation costs extend well beyond technology expenses to include:
- Data preparation and integration: Often the largest and most underestimated cost category, with research from Gartner indicating these activities typically consume 50-65% of AI project resources. While some projects may see higher costs in model development areas, data quality consistently remains the foundation for success.
- Model development and tuning: Includes both initial development and iterative refinement, with costs varying significantly based on model complexity and whether you’re customising existing models or building from scratch.
- Technology infrastructure: Initial acquisition and/or setup costs for cloud resources, specialised hardware, and supporting platforms—distinct from the ongoing operational expenses these will generate.
- Governance framework development: Creating oversight mechanisms, documentation standards, and risk management protocols.
- Integration with existing systems: Adapting workflows and applications to incorporate AI capabilities.
- Testing and validation: Ensuring accuracy, reliability, and ethical performance.
- Capability building: Training teams to work effectively with and alongside AI systems.
A crucial difference from traditional implementations is the iterative nature of these costs. Unlike one-time expenses, AI implementation often involves multiple refinement cycles as models are developed, tested, and improved. Business cases should reflect this pattern rather than assuming linear deployment.
This refinement cycle is somewhat analogous to cost optimisation for cloud workloads. As new generations of compute became available in the cloud, we saw corresponding decreases in cost. In AI terms, models are becoming more sophisticated while GPU hardware becomes faster, which should reduce costs over time for like-for-like workloads. In some cases, however, the need for retraining and fine-tuning may increase costs instead.
Operational Costs
Operational expenses for AI systems include several categories that require special consideration:
- Ongoing model monitoring and retraining: Ensuring continued accuracy as data patterns evolve.
- Bias and ethics testing: Regular evaluation to maintain fair, transparent operation.
- Data pipeline maintenance: Ensuring consistent, high-quality data flow.
- Specialised expertise: Data scientists, MLOps engineers, and AI governance specialists.
- Compute and storage resources: These costs appear in traditional business cases too, but may be materially larger because of the nature of AI workloads.
- Change management: These costs also form part of traditional cloud adoption and technology business cases, but the specialist nature of AI (the need for data scientists, for example) could make them higher than you would ordinarily plan for.
- Governance and compliance: Maintaining appropriate oversight and documentation.
The most accurate business cases separate these operational costs from implementation expenses to provide realistic ongoing cost projections. They also account for how these costs evolve as the initiative matures through the AI Stages of Adoption, with particular attention to scaling considerations. As I’ve discussed in previous articles, there are significant cross-initiative economies of scale to be gained as organisations progress through the AI adoption journey. Shared data infrastructure, governance frameworks, and technical expertise can be leveraged across multiple AI initiatives, potentially reducing the marginal cost of each additional project. Business cases should explicitly acknowledge these economies of scale to avoid overestimating costs when evaluating portfolios of AI initiatives.
Another important consideration is the variable and sometimes unpredictable nature of operational costs for certain AI workloads. Unlike traditional IT implementations with stable usage patterns, user-facing AI applications such as chatbots can experience significant demand fluctuations, making cost forecasting particularly challenging. The inherent unpredictability of user interactions can lead to considerable variations in inference costs and resource requirements. In contrast, operational AI applications like predictive maintenance typically process more consistent data volumes with established parameters, creating more predictable cost profiles. Business cases should differentiate between these types of applications and include appropriate contingencies for more unpredictable workloads to avoid mid-implementation budget surprises.
Transition Costs
Transition costs address the organisational changes required as AI transforms processes:
- Business process redesign: Adapting workflows to incorporate AI capabilities effectively.
- Training and upskilling: Preparing teams to work with AI systems.
- Role evolution: Supporting employees whose responsibilities change. This dimension mirrors what the Cloud Value Framework categorises as staff productivity — where we quantify how roles transform and how skilled employees’ time is redirected toward higher-value activities.
- Parallel operations: Maintaining existing systems during transition periods. In my experience with cloud transformations, I’ve observed what is referred to as the “double bubble” phenomenon—where organisations run legacy systems alongside new implementations for extended periods. This parallel operation frequently extends beyond initial projections, materially impacting the financial case. AI implementations face similar challenges that must be realistically accounted for.
- Cultural adaptation: Building understanding and acceptance across the organisation. The mass consumerisation of AI through tools like ChatGPT creates a fundamentally different adoption dynamic than we saw with cloud. Where cloud adoption often faced resistance from established IT professionals, AI benefits from widespread personal exposure. This potential acceleration in cultural acceptance represents a strategic advantage for AI business cases, though executive concerns around governance and ethics remain critical considerations.
Transition costs are frequently underestimated or entirely omitted from traditional business cases, yet they often determine success or failure in practice. The most effective business cases explicitly account for these expenses and spread them realistically across implementation phases.
For each cost category, I recommend including both direct financial expenses and resource requirements such as staff time, leadership attention, and organisational capacity. This comprehensive view helps boards understand the full investment required beyond simple budget allocations.
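To make the three-dimension cost structure concrete, here is a minimal cost-model sketch. The three dimensions follow the structure above, but every figure is an illustrative placeholder rather than a benchmark, and the simple refinement-cycle multiplier is just one possible way to represent the iterative nature of implementation costs.

```python
# Sketch of a phased AI cost model across the three dimensions
# discussed above. All figures are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class CostModel:
    implementation: dict[str, float] = field(default_factory=dict)  # iterative, per refinement cycle
    operational: dict[str, float] = field(default_factory=dict)     # recurring, per year
    transition: dict[str, float] = field(default_factory=dict)      # one-off organisational change

    def total_cost_of_ownership(self, years: int, refinement_cycles: int = 1) -> float:
        """Implementation costs recur across refinement cycles;
        operational costs recur annually; transition costs are one-off."""
        impl = sum(self.implementation.values()) * refinement_cycles
        ops = sum(self.operational.values()) * years
        trans = sum(self.transition.values())
        return impl + ops + trans

model = CostModel(
    implementation={"data preparation": 400_000,
                    "model development": 250_000,
                    "integration": 150_000},
    operational={"monitoring and retraining": 120_000,
                 "compute and storage": 200_000},
    transition={"process redesign": 100_000,
                "training and upskilling": 80_000},
)

# Three-year view assuming two refinement cycles of implementation work
print(f"3-year TCO: £{model.total_cost_of_ownership(years=3, refinement_cycles=2):,.0f}")
```

Even a simple model like this makes the difference between one-off, recurring, and iterative costs visible to a board, which a single headline budget figure cannot.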
Mapping Value Within the Well-Advised Framework
Building on the Well-Advised balanced scorecard approach, effective AI business cases must trace how value creation manifests specifically within each pillar. Rather than introducing new dimensions, we can map AI capabilities directly to outcomes using Well-Advised, creating a coherent story of value creation that remains consistent with the strategic narrative.
For each Well-Advised pillar, effective business cases must trace how AI capabilities connect to specific outcomes and establish a comprehensive measurement approach spanning different time horizons. The following framework helps boards understand both the causal relationships and how success will be measured across leading, lagging, and predictive indicators.
Well-Advised Pillar | Value Creation Outcomes | Leading Indicators | Lagging Indicators | Predictive Indicators |
---|---|---|---|---|
Innovation & New Products/Services | Enhanced R&D capabilities leading to faster product development cycles; New AI-enabled offerings creating previously impossible value propositions; Platform capabilities enabling ecosystem-based innovation; Data-driven insights uncovering unmet customer needs | Prototype development velocity; AI experiment completion rates; User feedback quality scores; Ideation volume from AI systems | New product revenue; Market share in new segments; Patent applications; Time-to-market reduction | Innovation opportunity forecasts; Market trend predictions; Technology adoption projections; Competitive positioning models |
Customer Value & Growth | Increased personalisation improving customer satisfaction and loyalty; Enhanced service capabilities driving retention and expanded relationships; Improved targeting precision increasing acquisition effectiveness; AI-enhanced experiences creating competitive differentiation | User engagement with AI features; Early adoption metrics; Customer feedback sentiment; Service response improvements | Customer satisfaction scores; Net Promoter Score; Customer lifetime value; Retention rate improvements | Churn probability models; Customer need predictions; Market segment growth forecasts; Personalisation impact simulations |
Operational Excellence | Automated processes reducing manual effort and error rates; Predictive capabilities improving resource allocation and utilisation; Enhanced quality control systems reducing waste and rework; Process optimisation compressing cycle times and improving throughput | Process improvement early results; Error reduction in pilot areas; System reliability metrics; AI-human collaboration efficiency | Cost per transaction; Labour productivity gains; Quality improvement metrics; Cycle time reductions | Process bottleneck predictions; Resource optimisation models; Maintenance requirement forecasts; Capacity utilisation projections |
Responsible Transformation | Improved compliance monitoring reducing regulatory exposure; Enhanced risk detection enabling proactive mitigation; Sustainability improvements through resource optimisation; Workforce augmentation enabling focus on higher-value activities | Governance framework adoption; Risk assessment coverage; Bias testing results; Sustainability initiative metrics | Compliance incident reduction; Risk exposure metrics; Carbon footprint impact; Workforce skill transformation | Regulatory change forecasts; Ethics violation probability; Sustainability impact projections; Workforce transition models |
Revenue, Margin & Profit | Direct cost reduction through operational improvements; Revenue enhancement through improved sales effectiveness; Margin expansion through pricing optimisation; Profit growth through combined efficiency and growth effects | Early revenue indicators; Cost reduction in pilot areas; Pricing optimisation test results; Sales conversion improvements | Revenue growth; Margin expansion; Cost structure improvements; Return on AI investment | Market opportunity forecasts; Profit growth simulations; Economic scenario modelling; Revenue stream diversification projections |
This integrated framework connects the “what” (value creation outcomes) with the “how” (measurement approach) for each Well-Advised pillar. It provides boards with a comprehensive view of value creation that spans from initial pilots to long-term transformation, tracking progress across all stages of AI adoption through:
- Value Creation Outcomes: The specific business benefits AI capabilities deliver within each pillar
- Leading Indicators: Early signals of success that can be measured during pilot phases
- Lagging Indicators: Confirmed value metrics that validate investments after implementation
- Predictive Indicators: Forward-looking metrics that help identify future opportunities
Business cases should clearly articulate which metrics will be tracked at each stage of implementation, creating a cohesive narrative of value creation that spans from initial pilots to long-term transformation.
Creating Value Chain Visualisations
To help boards understand these value connections, I recommend creating visual mappings that show how specific AI capabilities flow through to business outcomes within each Well-Advised pillar. For example, an AI contract analysis system might show a value chain that demonstrates:
AI Capability | Operational Impact | Strategic Outcome | Financial Value |
---|---|---|---|
Document processing | 75% faster review time | Increased contract throughput | £450K annual labour cost reduction |
Pattern recognition | 60% error reduction | Improved compliance and auditability | £300K compliance risk reduction |
Knowledge extraction | Enhanced contract analytics | Stronger negotiation position | £400K improved contract terms |
These visualisations help boards understand both the direct impacts and the network effects that emerge as AI capabilities mature and extend across the organisation. They maintain consistency with our Well-Advised framework while providing the granular cause-effect mapping boards need to evaluate investment cases.
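The value chain table above can equally be captured as structured data, which makes it straightforward to total the quantified annual value for a board pack. This sketch simply reuses the illustrative figures from the table.

```python
# Sketch: the contract-analysis value chain above as structured data,
# reusing the illustrative figures from the table.

from typing import NamedTuple

class ValueLink(NamedTuple):
    capability: str
    operational_impact: str
    strategic_outcome: str
    annual_value_gbp: int

value_chain = [
    ValueLink("Document processing", "75% faster review time",
              "Increased contract throughput", 450_000),
    ValueLink("Pattern recognition", "60% error reduction",
              "Improved compliance and auditability", 300_000),
    ValueLink("Knowledge extraction", "Enhanced contract analytics",
              "Stronger negotiation position", 400_000),
]

total = sum(link.annual_value_gbp for link in value_chain)
print(f"Total quantified annual value: £{total:,}")
```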
Aligning ROI Horizons with AI Stages of Adoption
Traditional ROI calculations typically focus on a single time horizon—often 1-3 years—which systematically undervalues AI’s longer-term and compound impacts. Rather than using arbitrary timeframes, I recommend aligning return horizons directly with the AI Stages of Adoption framework, creating a more cohesive approach to value measurement.
This alignment helps boards understand how value profiles evolve as initiatives mature through different adoption stages, providing a more intuitive way to evaluate investment cases. It also reinforces the reality that AI initiatives create different types of value at different stages of maturity.
Experimenting and Early Adopting Stage Returns (0-12 months)
The initial stages focus on validating approaches and building foundational capabilities:
- Operational efficiencies: Productivity gains, cost reductions, and quality improvements in targeted areas
- Early revenue impacts: Initial sales increases, retention improvements, or pricing optimisation
- Capability demonstrations: Proof points that validate the approach for further scaling
- Learning value: Knowledge and experience gained that informs future initiatives
While financial metrics provide some validation in this horizon, capability building and learning often represent the most significant value. Early KPIs should include user adoption rates, accuracy measures, time savings, and initial financial impacts, but equally important are the insights gained and capabilities established.
Adopting and Optimising Stage Returns (1-2 years)
As initiatives progress through adoption and into optimisation, value scales across the organisation:
- Expanded application: Extension to additional business units or customer segments
- Process transformation: More fundamental changes to workflows and decision processes
- Enhanced capabilities: More sophisticated models with broader applications
- Synergies with other initiatives: Combinatorial effects with other digital capabilities
This phase typically shows accelerating returns as fixed investments in data, infrastructure, and governance support broader application. ROI calculations should reflect this increasing return rate rather than assuming linear progression. The value created during this stage often justifies the initial investment made during experimentation.
Transforming and Scaling Stage Returns (2+ years)
In the most advanced stages, AI fundamentally reshapes core business capabilities:
- Business model innovation: Fundamental changes to value proposition and delivery
- Market position transformation: Changed competitive dynamics and industry relationships
- Organisational capability evolution: New skills, processes, and decision-making approaches
- Ecosystem leadership: Influence over partners, standards, and industry practices
While these longer-term impacts involve greater uncertainty, they often represent the most significant value potential. The most effective business cases acknowledge this uncertainty while providing scenario-based projections rather than single-point estimates.
For each adoption stage, I recommend including both financial metrics (NPV, ROI, payback period) and strategic impact measures aligned with the Well-Advised Framework. This balanced approach helps boards evaluate both immediate returns and long-term strategic value, making more informed investment decisions than traditional approaches allow.
This adoption-aligned approach is particularly important for organisations in earlier AI Stages of Adoption. Initiatives in the Experimenting and early Adopting stages often show modest initial returns while building capabilities that enable substantially greater value in later stages. Without this stage-based view, boards might prematurely terminate promising initiatives based on initial returns alone, missing their long-term transformative potential.
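A minimal sketch of this stage-aligned view: instead of a single-horizon ROI, net cash flows are grouped by adoption stage and a cumulative NPV is reported at each stage boundary. The stage labels follow the horizons above; the cash flows and discount rate are purely illustrative assumptions, and scenario-based ranges would replace the single figures in practice.

```python
# Sketch: stage-aligned return modelling. Net cash flows (£K) are
# grouped by adoption stage and discounted per year, so the board
# sees how the value profile evolves rather than one 3-year number.
# All cash flows and the discount rate are illustrative.

def npv(cash_flows: list[float], discount_rate: float) -> float:
    """Discount a list of annual net cash flows (year 1 onwards)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

stages = {
    "Experimenting/Early Adopting (0-12 months)": [-600],  # capability building
    "Adopting/Optimising (1-2 years)": [350, 700],         # accelerating returns
    "Transforming/Scaling (2+ years)": [900, 1_100],       # compound impact
}

rate = 0.10
cumulative: list[float] = []
for stage, flows in stages.items():
    cumulative.extend(flows)
    print(f"{stage}: cumulative NPV £{npv(cumulative, rate):,.0f}K")
```

Reporting the cumulative NPV at each stage boundary makes the article's point explicit: an initiative that looks marginal after the experimenting stage can still carry a strongly positive whole-journey case.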
Designing Validation and Scaling Plans
Effective AI business cases don’t just project outcomes—they outline how value will be validated and then scaled across the organisation. This structured approach provides boards with confidence that investments will deliver claimed returns while anticipating expansion requirements.
The validation and scaling plan should address several key dimensions:
Pilot Design and Success Criteria
The validation approach should define:
- Pilot scope: Specific processes, business units, or customer segments for initial implementation
- Timeline and phases: Key milestones from initial development through validation
- Success metrics: Clear KPIs aligned with projected value across all relevant dimensions
- Decision criteria: Specific thresholds for determining whether to scale, refine, or reconsider
- Feedback loops: Mechanisms to capture learnings throughout the validation process
- Governance protocols: Oversight processes for monitoring progress and managing risks
Effective pilot designs balance sufficient scope to demonstrate value with appropriate constraints to manage risk. They focus on validating not just technical performance but business impact—confirming that AI capabilities translate to meaningful outcomes in real operational contexts.
Scaling Roadmap and Requirements
The scaling plan should outline:
- Expansion pathways: Priority areas for extending capabilities after initial validation
- Capability requirements: Additional infrastructure, governance, and expertise needed
- Integration roadmap: How AI capabilities will embed into core systems and processes
- Resource projections: Budget, staffing, and other resources required across scaling phases
- Risk management approach: How implementation and operational risks will be addressed
- Decision points: Key milestones where expansion decisions will be evaluated
The most effective scaling plans align with your overall position in the AI Stages of Adoption journey. Organisations in earlier stages should focus on building foundational capabilities that enable future scaling, while those in more advanced stages can emphasise broader deployment and transformation potential.
Capability Development Strategy
As I outlined in the Five Pillars framework, sustainable AI value requires developing capabilities across multiple dimensions. The business case should address how the initiative will build these capabilities through:
- Governance evolution: How oversight and accountability mechanisms will mature
- Technical infrastructure development: Platform capabilities that enable scaling
- Operational excellence foundations: Processes that ensure reliable, consistent performance
- Value realisation approaches: Methods for tracking and maximising returns
- People and culture preparation: Skills development and change management
This capability view helps boards understand how individual initiatives contribute to broader organisational readiness. It emphasises the platform value created alongside direct business returns, justifying investments that might otherwise appear marginal when viewed in isolation.
The most compelling business cases present validation and scaling not as sequential processes but as integrated approaches to learning and value creation. They position initial implementations as strategic assets that generate both immediate returns and critical insights for future expansion.
This approach helps overcome the traditional business case challenge of requiring comprehensive plans before implementation. By structuring initiatives as learning-oriented investments with clear validation criteria, organisations can move forward despite initial uncertainty while maintaining appropriate governance and accountability.
Integrating the Components into a Coherent Business Case
While we’ve explored each component separately, effective AI business cases integrate them into a coherent narrative that helps boards understand the complete value proposition. This integration should balance rigorous analysis with strategic vision while focusing on the dimensions that boards care about most.
I recommend structuring AI business cases around four core sections that align with our established frameworks:
Strategic Intent and Well-Advised Alignment: How the initiative advances organisational priorities across the Well-Advised dimensions, with clear connections to strategic goals and stakeholder benefits. This section uses the balanced scorecard approach to demonstrate comprehensive strategic alignment.
Investment Requirements and Cost Structure: Comprehensive financial projections across implementation, operational, and transition dimensions, with phased resource requirements and consideration of cross-initiative economies of scale.
Value Creation and Stage-Based Returns: Multi-horizon value projections aligned with AI Stages of Adoption, mapping how specific AI capabilities drive outcomes within each Well-Advised pillar.
Validation and Scaling Strategy: Pilot approach, success criteria, expansion pathways, and capability development plans that build organisational readiness across the Five Pillars framework.
Each section should include both narrative explanations and structured metrics, creating a balanced case that appeals to different stakeholder perspectives. The financial analysis should maintain traditional rigour while incorporating AI’s unique characteristics, particularly around iterative development, capability building, and non-linear value creation.
For many organisations, particularly those in earlier AI maturity stages, the AI Centre of Excellence plays a crucial role in business case development. The AI CoE can provide standardised templates, governance oversight, and expertise that ensure consistent high-quality business cases across initiatives. This centralised approach helps boards compare opportunities effectively while maintaining appropriate governance standards.
Conclusion: Turning Clarity into Confidence
AI business cases cannot be lifted from traditional templates. They must reflect AI’s iterative, cross-functional nature, its shifting cost structures, and its compound impact over time. By using the Well-Advised Framework, boards can assess AI opportunities with a lens that balances innovation, ethics, operational rigour, and financial value.
The most successful organisations don’t just justify AI investment — they create shared confidence in the path ahead. This clarity enables faster decision-making, reduces implementation risk, and sets the stage for long-term transformation.
Next Steps
To put this into action, I recommend starting with three concrete steps:
- Select one AI opportunity and map its strategic alignment using the Well-Advised Scorecard.
- Construct a value chain visualisation showing how that initiative delivers value across dimensions.
- Model the cost structure and ROI profile across the AI Stages of Adoption to show both immediate and long-term returns.
Boards don’t need perfect foresight to approve AI investments — they need disciplined business cases that reflect reality, risk, and readiness.
In the final article of this series, we’ll explore how to present AI business cases effectively to Boards, addressing common questions and concerns and ensuring stakeholder buy-in while building the understanding needed for confident decision-making. This presentation guidance will help ensure that well-constructed business cases receive appropriate consideration and support from key stakeholders.
Let's Continue the Conversation
I'm interested in hearing about your organisation's experience with building AI business cases. What aspects have you found most challenging? Which approaches have proven most effective in gaining board approval for AI investments?
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two-billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.