The Great Remaking: The Questions Boards Should Be Asking About Their AI Position

In this final part of The Great Remaking series, I build on the analytical case made in the first three parts: what is happening and why it is different, how each dimension of work is being remade, and why waiting compounds the gap. That case concerns the remaking of the essence of work itself: thinking, deciding, creating, and delivering. It established that the gap between organisations genuinely redesigning their work around AI and those merely augmenting the status quo is already measurable in shareholder returns, and that the mechanism of that gap, three self-reinforcing loops that compound proprietary data, operational talent, and institutional redesign knowledge, means every month of delay increases the cost of closing it. What most Boards still lack is a practical diagnostic for assessing where, honestly, their organisation sits.
The problem is not that Boards are ignoring AI. It is that the questions most Boards are asking are structured around the wrong measures. Pilot counts, AI budget lines, the existence of an AI strategy, and Chief AI Officer appointments are activity metrics: they measure whether money is being spent and projects are running. That matters, but it is not the same thing as measuring whether the organisation is building compounding advantage. A Board that accepts management’s self-reported progress at face value is not exercising oversight; it is ratifying a narrative that, as the next section shows, is systematically inflated.
This article provides a set of probing questions structured around the three loops introduced in last week’s article, together with an interpretive guide to what credible answers look like. This is not a governance checklist. It is a way of distinguishing organisations genuinely building the loops from those accumulating technology deployments that look like progress but are not.
Why the usual questions fail
Boards commonly ask “how many pilots are in progress?”, “how much are we spending on AI?”, “have we developed an AI strategy?”, and “do we need to appoint a Chief AI Officer yet?”. These are a reasonable starting point, but they are an inadequate basis for assessing competitive position, because they measure the wrong layer of the value stack. BCG’s research published in February 2026 found that approximately 10% of AI value comes from algorithms, 20% from the technology required to implement them, and 70% from rethinking the people component. Standard Board AI questions measure the 30% that is technological and replicable by any competitor with access to the same tools. They do not reach the 70% where compounding advantage accumulates. An organisation running twenty pilots without redesigning a single workflow is not building capability in any of the three loops.
The same BCG research found that only 5% of organisations have achieved substantial financial gains from AI. That figure sits uncomfortably alongside a January 2026 HBR survey in which 39% of organisations described AI as being in production at scale. The gap is not explained by poor implementation alone, but by the systematic overestimation bias documented in The AI Maturity Mirage: three patterns by which organisations consistently inflate their AI position, substituting tool deployment for capability, pilot success for scalability, and hype-driven metrics for genuine progress. Management self-assessments of AI maturity inherit all three patterns. A Board that asks “what is our AI maturity?” and accepts the answer without probing it is therefore not performing oversight. It is endorsing an internal narrative the evidence shows is systematically inflated.
The specific inadequacy of standard Board questions is that none of them addresses the three compounding loops directly. AI spend says nothing about whether the data loop is operating, pilot counts say nothing about whether the talent loop is developing, and the existence of a strategy document says nothing about whether the process redesign loop is accumulating institutional learning. The diagnostic that follows is structured to close that gap.
Probing the data loop
The data loop compounds when AI-integrated workflows generate higher-quality, better-structured operational data that feeds back into the organisation’s AI systems and improves their performance over time. For this loop to operate, three conditions must hold simultaneously: workflows must have been redesigned around AI rather than merely augmented, the data those workflows generate must be captured in a form accessible to AI systems, and there must be a feedback mechanism by which that data improves subsequent AI performance. Most organisations that believe their data loop is operating have mistaken deployment for the first condition: they have added AI tools to existing processes, and have satisfied neither the second condition nor the third. Deployment is not redesign, and data generated by augmented workflows often passes through the organisation without ever becoming a compounding asset.
The question Boards should ask is not “do we have good data?” or “are we investing in data infrastructure?”. Those are necessary questions, but affirmative answers to them are not evidence that the loop is functioning. The question that probes the loop directly is whether management can describe a specific workflow that has been redesigned, not merely augmented, around AI in the past eighteen months, and explain what data that redesign generates that the pre-existing workflow did not. If management cannot name a single such workflow with specificity, the data loop is not operating. That inability is not a failure of intention. It is a precise indicator of where the organisation actually sits on its redesign trajectory.
A second line of inquiry concerns what happens to the operational data that AI-integrated workflows produce. Is it captured, structured, and used to improve subsequent AI performance, or does it pass through the system without feeding back into anything? If management cannot answer that question in concrete terms, the loop is not running. Then comes the moat question, the one most worth pressing: how does the organisation’s proprietary operational data make its AI systems more effective than a competitor using the same underlying AI capability with generic data? An organisation whose AI advantage is fully replicable by any competitor with access to the same model has not yet built a data loop. Credible answers to these questions are specific, operational, and traceable to named workflows and measurable data assets. Vague references to “data strategy” or “AI-ready infrastructure” without concrete examples of the feedback mechanism are reliable indicators that the loop is not yet genuinely operating.
Probing the talent loop
The talent loop compounds when people develop AI-integrated capabilities through operational experience in redesigned workflows. These are capabilities that training programmes alone cannot produce; they accumulate through sustained practice in doing the work differently. Most organisations conflate training with this kind of capability development. They run AI training programmes and assume the talent loop is therefore operating. It is not. The talent loop requires the organisation to have created conditions in which capability develops through high-quality, consequential human-AI collaboration: redesigned workflows where people make substantive decisions about when to trust AI outputs, where to apply human judgement, and how to interpret AI recommendations in operational context. An organisation that has run extensive AI training programmes but has not restructured the work itself has not built the talent loop. It has made its people more valuable to the organisations that have.
The question Boards should ask first is whether the organisation’s people are developing AI capability through doing redesigned work or primarily through structured training programmes. Both matter, but only the former compounds. If the honest answer is primarily the latter, the talent loop is not yet operating at a level that produces structural advantage. The follow-up questions go deeper. Where in the organisation are people making substantively different decisions because of how AI has been integrated into their workflows: not faster versions of the same decisions, but genuinely different ones? In thinking work, are analysts framing problems differently because of what AI has made possible? In deciding work, has the cognitive content of human judgement shifted in ways that management can describe with operational precision? If the answers to those questions are vague, the talent loop is not operating.
The most revealing question a Board can ask is this: if your most AI-capable people left tomorrow, what would the organisation actually lose? Replaceable technical skills that could be filled through hiring, or hard-to-replicate institutional knowledge about how human judgement and AI integrate in your specific operational context? If the honest answer leans toward the former, the talent loop is not yet producing the structural advantage that compounds. The latter, an accumulated understanding of where AI outputs should be deferred to, where they should be overridden, and the reasoning behind each, is what the talent loop produces when it is genuinely running. The evidence for it lies in how individual roles have changed in substance, and in whether that capability lives in operational practice or only in training records.
Probing the process redesign loop
The process redesign loop is the most abstract of the three, and therefore the hardest to probe effectively. It compounds when organisations develop institutional capability for redesign itself: the accumulated knowledge of what redesign looks like in their specific context, the cross-functional relationships that enable it, the governance structures that sustain it, and the learning from each iteration that makes the next one faster and more effective. The distinction that matters most is between a project and a practice. An organisation that completed one redesign cycle and returned to the status quo has not activated this loop. The loop requires redesign to become a recurring operational capacity, not a transformation programme with a defined completion date. This is the distinction most management teams find hardest to articulate, and that difficulty is itself diagnostic.
The first question Boards should ask is how many fundamental restructurings of how work is organised — not augmentations, not pilots — the organisation has completed in the past two years. If the answer is one or two discrete projects, the loop is not yet operating at scale. If the examples offered turn out, on examination, to be AI added to an existing process rather than the process rebuilt around AI, which is the most common conflation, the loop almost certainly is not operating regardless of what internal progress metrics report.
The question that most reliably distinguishes organisations building institutional redesign knowledge from those completing isolated projects is what the organisation has learned about redesign itself: not about any particular AI capability, but about the process of restructuring work around AI. What would management do differently in the next redesign cycle, based on what it learned from the last one? If there is no specific, operational answer to that question, only general observations about change management or stakeholder engagement, the process redesign loop is not building the institutional learning that makes each subsequent redesign faster. Organisations with an active loop talk about redesign as a continuous way of working, not a programme with an end date. That shift in language reveals the real operating model more reliably than any progress dashboard ever could.
The questions that change the conversation
This diagnostic is not a replacement for the strategy formulation work covered in the AI Strategy series on this site, where the strategic kernel of diagnosis, guiding policy, and coherent actions remains the right frame for building AI capability. The diagnostic here operates at a different level: not how to build the strategy, but how to test whether it is producing the compounding advantage it claims.
Asking these questions rather than the standard activity-metric proxies restructures the Board conversation itself. Instead of management presenting progress and the Board accepting or probing the presentation, the three-loop diagnostic gives directors specific questions that are genuinely difficult to answer well from a prepared slide, together with an interpretive frame for distinguishing credible answers from the proxies that overestimation tends to produce. Management teams that can answer specifically and operationally, naming workflows, describing feedback mechanisms, and articulating what accumulated capability would be lost, are genuinely building the loops. Those whose responses default to strategy documents, pilot counts, and investment figures are not. That is not a criticism of those teams: overestimation is a structural feature of how organisations process AI progress, not a failure of individual teams working hard on difficult problems. The diagnostic is a corrective tool, not a verdict.
The series closes, then, with this observation: understanding what is being remade, how each dimension is changing, why delay compounds the cost of catching up, and how to assess your organisation’s real position are not four separate questions. They are four elements of a single Board-level conversation that few Boards are currently having in a form that translates into genuine financial gains. The distance between the few that are and the rest suggests those conversations need to begin urgently. The window is narrower than most realise, because the loops, once running, accelerate away from organisations still measuring activity.
Let's Continue the Conversation
Thank you for reading the final article in The Great Remaking series. If the three-loop diagnostic was useful, I’d genuinely value your perspective on where it lands hardest in practice. Which of the three loops (data, talent, or process redesign) does your Board find most difficult to get credible answers on from management? And when you apply these questions mentally to your organisation’s current position, does the honest answer surprise you? I’d welcome hearing where the diagnostic creates the most productive friction: in a Board already having this conversation and finding the questions sharper than expected, in one that has realised its standard measures have been missing the 70% that actually compounds, or in one that has not yet started and is now wondering where to begin.




