---
title: "The Compounding Ledger: Why AI Spend Breaks the CFO's Rulebook"
date: 0001-01-01
canonical: https://mariothomas.com/blog/cfo-compounding-ledger/
---


# The Compounding Ledger: Why AI Spend Breaks the CFO's Rulebook

The clearest number in [Bain's April 2026 CFO Survey](https://www.bain.com/insights/cfos-funded-ai-revolution-now-they-are-joining-it/) is not a headline about adoption or budget intent. It is a gradient. **31%** of CFOs are satisfied with their AI outcomes overall, **41%** among those who have already scaled AI into production, and **over 60%** in the top quartile of AI maturity. Satisfaction moves with scale.

This is not a failure story. It is an inflection story. CFOs who have moved past the pilot stage are seeing returns; the profession as a whole has not yet acquired the instruments to see them consistently. [PwC's 2026 AI Performance Study](https://www.pwc.com/gx/en/news-room/press-releases/2026/pwc-2026-ai-performance-study.html), drawing on interviews with **1,217** senior executives, finds that **74%** of AI's economic value is captured by **20%** of organisations, a concentration only possible where something compounding is at work.

The finance function is not behind the curve. It is standing at the precise point where the profession's instruments are about to evolve. This article diagnoses why the conventional ledger under-reads AI value, describes what the leading CFOs are measuring instead, and sets out the practical moves available now.

## The scale gradient is real

Bain's 2026 CFO Survey, based on responses from **102** CFOs, roughly half of them at organisations above **$5 billion** in revenue, offers the clearest quantitative evidence yet that AI satisfaction and AI scale move together. The **31%** overall satisfaction figure masks a sharper reality underneath: **41%** among CFOs who have deployed AI at scale, and **over 60%** in the top quartile of AI maturity. Movement along the maturity curve produces measurable improvement in outcome satisfaction.

Bain's own reading is that the satisfaction gradient argues for reframing the CFO's business case around speed — time-to-insight, time-to-action, days-to-close, and forecast cadence — rather than cost alone. That is a sound argument, and it is also a first step rather than a final one.

The concentration pattern visible in adjacent research points in the same direction. PwC's 2026 AI Performance Study finds **74%** of AI's economic value captured by **20%** of organisations, across a sample of **1,217** executives in 25 sectors. [BCG's Widening AI Value Gap work](https://www.bcg.com/press/30september2025-ai-leaders-outpace-laggards-revenue-growth-cost-savings), drawing on survey responses from **1,250** executives across nine industries, reports that the **5%** of organisations classified as "future-built" expect roughly twice the revenue increase and **40%** greater cost reductions than laggards in the areas where they apply AI. Future-built organisations allocate **64%** more of their IT budget to AI and reinvest their returns into further capability, a behaviour pattern that conventional project-ROI scorecards cannot explain.

The investment intent is already flowing. Bain reports that **83%** of CFOs plan to increase enterprise-wide AI spend by more than **15%** over the next two years, with **42%** planning increases of **30%** or more. [Grant Thornton's Q1 2026 CFO Survey](https://www.grantthornton.com/insights/press-releases/2026/march/cfos-accelerate-tech-spending-as-ai-momentum-increases) adds corroboration: **68%** of CFOs expect IT and digital spend to rise over the coming year, the highest figure in the survey's **21-quarter** history. Capital is moving. The instruments have not kept pace.

This is the AI Stages of Adoption rendered at the level of the finance function. Different organisations are at different stages, and the satisfaction data tracks that reality.

## Why the conventional ledger under-reads AI value

AI investment behaves in ways the finance profession's standard instruments were not designed to register. Three characteristics matter.

First, AI assets appreciate rather than depreciate through use. Models improve as they accumulate interaction data. Processes reorganised around AI gain efficiency as staff adapt to them. Governance frameworks built for one application become infrastructure for the next. Conventional depreciation schedules, calibrated to physical assets that wear out, miss this entirely. A fine-tuned model is often worth more in year three than in year one, which is the opposite of the pattern the ledger is built to record.

Second, returns compound through reinvestment rather than accrue linearly. An AI initiative that releases capacity in one function funds the next initiative in another. The governance work done for the first high-risk use case becomes the platform for the next five. BCG's future-built data shows this reinvestment pattern empirically: the leading organisations allocate **64%** more of their IT budget to AI, and the compounding shows up in their three-year performance. Standard ROI calculations, which stop at the attributable benefit of a single project, do not observe the reinvestment multiplier.
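The difference between the two readings can be sketched in a few lines. The figures below are hypothetical placeholders, not values from Bain or BCG; the point is only the shape of the curve once a share of each year's benefit funds new capability.

```python
# Illustrative sketch only: the benefit figure, reinvestment share, and
# incremental yield are hypothetical placeholders, not survey data.

def linear_returns(annual_benefit: float, years: int) -> float:
    """Project-ROI view: the same attributable benefit accrues each year."""
    return annual_benefit * years

def reinvested_returns(annual_benefit: float, years: int,
                       reinvest_share: float, reinvest_yield: float) -> float:
    """Portfolio view: a share of each year's benefit funds new capability,
    which lifts the benefit base in every following year."""
    base, total = annual_benefit, 0.0
    for _ in range(years):
        total += base
        base += base * reinvest_share * reinvest_yield
    return total

# A project releasing 1.0 unit/year, with 40% of gains reinvested at a
# 50% incremental yield, overtakes the linear reading within five years.
print(linear_returns(1.0, 5))                           # 5.0
print(round(reinvested_returns(1.0, 5, 0.4, 0.5), 2))   # 7.44
```

The standard ROI calculation reports the first number; the reinvestment multiplier that produces the second is exactly what a single-project scorecard cannot observe.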

Third, value crosses organisational boundaries. The data-preparation work done for a customer-service pilot becomes the foundation for a pricing model in a different business unit. The prompt library developed for one team is reused by another. The skills built by early adopters diffuse through the organisation. Project-level ROI, designed to attribute returns back to the initiative that generated them, cannot see value that surfaces elsewhere. This is the Scaling and Synergy Potential that the conventional ledger was not designed to capture. The same under-reading appears on the cost side: the True Investment Profile systematically undercounts data preparation, retraining cycles, and governance overhead. The ledger understates both the asset and the work that produces it.

The finance function is not alone in this blindness. [The Brookings Institution's January 2026 blueprint](https://www.brookings.edu/articles/counting-ai-a-blueprint-to-integrate-ai-investment-and-use-data-into-us-national-statistics/) for integrating AI investment into US national statistics describes what it calls "the J-curve created by complementary organisational investments": expenditures and organisational adjustment arrive first, depressing measured productivity, while output gains materialise only after firms have redesigned workflows, retrained staff, and integrated AI into decision processes. The same paper argues that national accounts still record AI as ordinary operating cost rather than investment, the same error the project-ROI ledger makes at the organisational level. The measurement problem scales upward.

The international accounting framework already permits the treatment AI capital requires. IAS 38 sets four tests for intangible assets: identifiability, control, measurability, and future economic benefit. Modern AI systems (trained models, curated datasets, prompt libraries, evaluation frameworks) meet all four. The rules permit this treatment; corporate practice has not yet caught up. The Financial Accounting Standards Board's September 2025 update, ASU 2025-06, adjusted the internal-use software capitalisation framework for fiscal years beginning after **15 December 2027**, and further movement is likely. The CFO who treats AI as capital is not arguing against the accounting profession. They are arguing alongside it.

## What the leading CFOs measure instead

Bain's finding is that CFOs who have scaled AI identify speed as their biggest AI win, even though cost and efficiency were their original objectives. The implication is that the scorecard should track time-to-insight, time-to-action, days-to-close, forecast refresh cadence, and time-to-variance resolution with the same rigour as cost. When speed becomes the headline metric, the AI value that a slower organisation would have missed becomes legible. Credit where it is due: this is Bain's argument, and it is a concrete first move for any CFO building a more honest scorecard.

The broader move is to adopt a four-dimensional indicator set. The leading CFOs do not rely on any single indicator category. They combine four:

- **Lagging indicators** of confirmed past outcomes: cost reductions, time saved, revenue attributed.
- **Leading indicators** of early signals: decision velocity, process adherence, capability adoption rates.
- **Predictive indicators** of future value: AI-modelled forecasts of where returns will emerge next.
- **Reasoned indicators** derived from automated reasoning or formal verification, which prove that a condition holds or does not hold.

The reasoned category is particularly relevant to the CFO. Automated reasoning can prove that a compliance condition is met, that a financial control has not been breached, or that a regulatory threshold has been maintained, producing outputs that carry a different kind of certainty from probabilistic prediction. Taken together, the four categories move the scorecard from retrospective accounting toward forward-looking decision instrumentation.

The harder shift is to treat capability as an asset class rather than an expense line. The work that produces compounding returns, meaning the data cleanup, the governance framework, the prompt library, the trained workforce, and the vendor relationships, currently appears in the ledger as expense. The leading CFOs are building a parallel view that treats these as capital contributions to an organisational capability that will generate returns over multiple years. Within IAS 38 this is defensible. Within the prevailing culture of expense treatment it requires deliberate intervention.

The next shift is from project discipline to portfolio discipline. [Gartner's 2026 guidance, reported via the CFO press](https://www.cfo.com/news/ai-investment-among-top-strategic-priorities-for-cfos-survey/816769/), is that CFOs should evaluate AI spend as a portfolio of distinct use cases with different timelines, risks, and metrics, not through a single ROI formula. The leading CFOs maintain a distribution across quick wins that fund reinvestment, capability builds that create compounding infrastructure, and transformative initiatives that deliver long-horizon value. Imposing uniform project discipline across that distribution collapses it into a single category and loses the compounding mechanism.

Reinvestment itself is the final move. BCG's future-built organisations reinvest AI-generated returns into further AI capability at a rate **64%** higher than laggards. That is not accidental. It is a finance-function decision. The leading CFOs have built the default that productivity gains from AI are reinvested in the next capability rather than absorbed into general operating margin. Without that default, the compounding mechanism has nowhere to compound.

This is the Scaling and Synergy Potential building block operationalised inside the finance function. The instruments already exist. The task is installation, not invention.

## The practical move

The fastest move available to the finance function is to add three categories to the existing AI investment scorecard: time-to-insight metrics that capture speed, cross-functional value attribution that captures the compounding mechanism, and capability accumulation that captures the emerging asset class. None of these require regulatory change. They require a decision about what gets tracked.

Separating the portfolio comes next. Splitting AI spend into its three categories and applying different evaluation criteria to each allows the compounding assets to become visible. Quick wins carry conventional ROI discipline, capability builds are evaluated on platform reuse and enabling value, and transformative initiatives are evaluated on option value and strategic positioning. The single-ROI-formula model is the one to retire. A portfolio measured as though every component were a quick win cannot surface the compounding assets hidden inside it.
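One way to make "different evaluation criteria per category" concrete is a rule table keyed by category. Everything below is illustrative: the thresholds, field names, and pass/fail logic follow the three-way split described above, not any published methodology.

```python
# Sketch of portfolio discipline: each category carries its own evaluation
# rule instead of one ROI formula. All thresholds are hypothetical.

def evaluate_quick_win(item: dict) -> bool:
    # Conventional ROI discipline: benefit must clear cost within a year.
    return item["annual_benefit"] >= item["cost"]

def evaluate_capability_build(item: dict) -> bool:
    # Platform logic: value is counted through reuse by other initiatives.
    return item["reusing_initiatives"] >= 2

def evaluate_transformative(item: dict) -> bool:
    # Option value: judged on strategic positioning, not near-term payback.
    return item["strategic_fit_score"] >= 0.7

RULES = {
    "quick_win": evaluate_quick_win,
    "capability_build": evaluate_capability_build,
    "transformative": evaluate_transformative,
}

def review_portfolio(portfolio: dict) -> dict:
    """Apply each initiative's category-specific rule."""
    return {name: RULES[item["category"]](item)
            for name, item in portfolio.items()}

portfolio = {
    "invoice triage bot":     {"category": "quick_win",
                               "annual_benefit": 0.8, "cost": 1.0},
    "shared prompt library":  {"category": "capability_build",
                               "reusing_initiatives": 3},
    "AI-native close process": {"category": "transformative",
                                "strategic_fit_score": 0.8},
}
# The quick win fails conventional ROI; the other two pass their own tests,
# which a single-formula scorecard would have marked down or missed.
print(review_portfolio(portfolio))
```

Forcing every item through `evaluate_quick_win` is the single-formula error in miniature: the prompt library and the close-process redesign would both fail a test that was never the right test for them.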

Establishing the reinvestment default is a governance decision as much as a measurement one. The explicit policy should be that a measurable share of AI-generated savings funds the next tranche of AI capability instead of disappearing into general operating margin. Deny the mechanism that fuel and it stops compounding. The CFO is well placed to set this default, and well placed to hold the organisation to it.

Early audit committee engagement shapes a conversation that is about to become routine. The movement in international accounting standards is real and ongoing. The CFO who briefs their audit committee now on the IAS 38 criteria and the live debate about AI capital treatment enters the reporting-cycle conversation as an architect rather than as a respondent. Early engagement shapes the agenda. Late engagement inherits it.

The invisible asset must be measured directly on the internal scorecard even while the external financial statements continue to treat it as expense. The capability that AI investment is building (the data, the governance, the skills, the prompt libraries, the vendor relationships) should appear in some form on the management report. Measurement begins inside the organisation. Reporting convention follows.

## The conversation that is about to become standard

The finance function is not behind the curve; it is at the moment its instruments change. The CFO's role in this evolution is architect, not gatekeeper. The shift from measuring AI as a project to measuring AI as capital is a capability the leading CFOs are already building. The opportunity for the rest is to build it now, before the pattern becomes expected.

The mechanism is compounding, and it will keep compounding. Bain's satisfaction gradient, PwC's concentration finding, and BCG's widening leader-laggard gap all point the same way. Organisations whose finance functions measure the compounding mechanism are pulling ahead. Organisations whose finance functions still measure the project are not.

The instruments exist already. The Four Indicator Types, the True Investment Profile, the Scaling and Synergy dimension, the portfolio approach to AI investment: all of these are available to any finance function willing to install them. The task is not invention. The task is installation.

The CFO who leads this evolution converts the AI investment conversation from one about cost justification to one about capital accumulation. That is a different conversation in a different register. It is also the conversation that is about to become standard.
