The Great Remaking: How the Four Dimensions of Work Are Transforming

In The Great Remaking, I argued that the gap between organisations redesigning work around AI and those augmenting the status quo is already measurable and compounds over time. That argument rested on a claim I did not fully substantiate: that thinking, deciding, creating, and delivering — the essence of work — are being remade asymmetrically, at different speeds, through different mechanisms, and toward different end states. This article makes that case precisely, because any AI strategy that treats the four dimensions as a single question is working from an incomplete diagnosis.
The analytical frame for that case is the trajectory each dimension is travelling: from AI augmenting human work, through restructuring how that work is organised, toward substituting it entirely. The asymmetry is the point. Thinking work is already at scale on the restructuring curve and approaching substitution in bounded domains. Creating has moved from augmentation to restructuring in three years and is actively targeting substitution for organisations whose primary output is digital content. Deciding is structurally different from the others: AI can increasingly assume the agency, but accountability does not transfer with it — a tension that has no real equivalent in the other three dimensions. Delivering is furthest from full substitution but moving faster than most leadership teams recognise, driven by embodied AI and falling unit costs in robotics rather than the language models that remade the cognitive dimensions.
The dimension that is moving fastest in your sector is where your strategy needs the most precision. A generic approach across all four is not a strategy — it is an assumption.
Thinking: proprietary knowledge as the moat
Of the four dimensions, thinking work was the first to be visibly restructured by AI and is furthest along the curve toward substitution in bounded analytical domains. The speed at which AI can analyse, synthesise, and surface patterns across large data sets now exceeds human capability in any domain where the relevant knowledge is publicly available or can be licensed. Research functions, strategy teams, and competitive intelligence units are not immune to this. The question is not whether AI will restructure their work, but whether the organisation's unique data and institutional knowledge create a defensible position once it does.
The augmentation phase — AI as a faster research assistant — is already commoditised. Restructuring means something more fundamental: it means redesigning how the thinking function itself is organised. What analytical work do humans perform versus delegate? How is institutional knowledge captured and made accessible to AI systems? How are the insights generated by human-AI combination turned into proprietary advantage? Organisations whose thinking work relies primarily on publicly available information face the steepest substitution risk. Those whose value lies in closed, institutional data — proprietary transaction history, customer behaviour, operational experience accumulated over decades — have a structural moat, but only if they have invested in making that data accessible and useful to their AI systems. The moat, in other words, is not the data itself; it is the proven capacity to put that data to work.
The human residual in thinking work is judgement: knowing which questions to ask, what the data does not contain, and when an analysis is technically correct but strategically wrong. AI can process with extraordinary speed and breadth. It cannot yet determine what matters, or when the problem has been framed incorrectly in the first place. That framing capacity is itself informed by relationships, institutional memory, and contextual understanding that AI cannot access from outside the organisation.
McKinsey’s November 2025 Global Survey on the state of AI tested 31 organisational and technical variables to identify what separates organisations seeing material EBIT impact from those that are not. Workflow redesign emerged as one of the strongest predictors — with high performers 2.8 times more likely to have fundamentally redesigned their workflows than their peers. This applies with particular force to thinking work, where the redesign question is not which AI capability to deploy, but how to make the organisation’s institutional knowledge the engine of competitive advantage.
The HBR January 2026 executive AI survey adds structural texture: 39% of organisations now have AI in production at scale, up from just 5% two years earlier, with 94% now beyond pure experimentation. The most revealing finding is that 93% of C-level data and AI leaders cite culture and change management as the single biggest barrier — the highest level in the survey’s history. This is not a technology adoption problem; it is precisely the people-and-redesign challenge McKinsey’s data identifies. The same HBR survey shows 38% of organisations have appointed a Chief AI Officer or equivalent, with a further 52% recognising the need for one. These appointments are a clear directional signal that the organisations serious about moving from augmentation to redesign are reorganising how thinking and knowledge work is governed — not merely tooled.
Deciding: the accountability tension
Deciding is the dimension where AI’s advance is clearest in bounded domains and most contested in complex ones — and where the tension between capability and accountability is sharpest. AI already outperforms humans in many structured decision domains: credit scoring, clinical triage within defined protocols, insurance risk modelling, fraud detection. In these contexts, the question of whether AI is augmenting or substituting human decision-making is increasingly settled by the evidence rather than the preference of the people involved.
What restructuring looks like in practice is more subtle than it first appears. Restructuring in the deciding dimension does not mean AI makes decisions formally on behalf of humans — at least not yet, and not uniformly. It means the decision architecture changes: the scope of decisions that humans make in substance narrows, even when they retain nominal authority. A credit manager who approves or overrides AI recommendations is deciding differently from one who assessed applications from first principles. The cognitive content of the decision — the actual judgement applied — has changed, even if the signature has not. Boards should be asking, and executive teams should be able to answer, how many of the organisation's decisions are substantively human and how many are effectively AI recommendations with human sign-off.
Deloitte's March 2026 Global Human Capital Trends report throws the scale of this shift into sharp relief: 60% of executives now regularly use AI to support decisions, yet only 5% of organisations consider themselves leading in AI-augmented decision maturity. Gartner projects that by 2027, half of all business decisions will be augmented or automated by AI agents. The gap between the prevalence of AI-assisted decisions and the governance maturity to oversee them is precisely the accountability gap this section identifies. The structural tension has a precise character: humans are being asked to exercise oversight over decisions that are opaque, high-speed, and increasingly high-volume. RACI models and traditional sign-off hierarchies were built around human-speed decisions; they were never designed for this velocity and scale.
This creates a structural tension that has no real equivalent in thinking, creating, or delivering work. AI can increasingly assume the agency in decision work — the capacity to assess, recommend, and in some cases act. But accountability does not transfer with agency. An underwriter's value is not the analysis; it is the professional commitment and the consequences that attach to it. A Board's value is not the insight; it is the fiduciary responsibility it carries. As AI assumes more of the decision substance, the question of who bears the consequence when it is wrong has a clear answer: not the AI. The Board bears it. What changes is not accountability, which remains with the humans at the top of the organisation, but the practical difficulty of exercising meaningful oversight over decisions made at machine speed and volume.
This is not a technology question. It is a governance question, and it belongs at Board level. Organisations that allow the substance of decisions to migrate to AI while accountability structures remain unchanged are creating a governance gap that will become visible only when something goes wrong — and at a moment when the trail of responsibility has become genuinely difficult to reconstruct. The human residual in deciding work is the willingness to commit and bear consequences. AI can inform, recommend, and model outcomes with considerable sophistication. It cannot yet be accountable — and in the domains where accountability matters most (capital allocation, legal liability, regulatory exposure, ethical responsibility), that distinction is not just valuable but constitutive of the role.
Creating: the most visible disruption
Creating is the dimension where the remaking is most visible, and where the augmentation-to-restructuring transition has happened fastest. Three years ago, generative AI was a curiosity for creative professionals — interesting, imperfect, and largely supplementary. Today it produces first drafts, generates code, designs products, creates campaigns, and formulates strategic options at a speed and scale that would previously have required large specialist teams. The organisations that recognised this early and redesigned their creative workflows are not running their previous processes faster. They are operating fundamentally different creative architectures.
Restructuring in creating work means the creative process itself changes — not merely the tools used within it. A marketing function that has restructured does not use AI to produce copy faster while doing everything else the same way. It operates with a different ratio of human creative direction to machine execution: more variants, faster testing cycles, and human judgement concentrated on the decisions that genuinely require it — tone, values alignment, brand judgement, cultural resonance — rather than on the volume production, first-draft generation, and format adaptation that AI can handle reliably. The restructured creative organisation is not simply smaller; it is differently configured, typically with higher strategic creative capability and lower execution bottlenecks.
Of the four dimensions, creating is the one whose substitution horizon is most explicitly acknowledged by those driving the technology. The direction of travel is already being articulated plainly: if an organisation's primary output is digital rather than physical, the entire creative and production process is theoretically automatable. This is not a near-term risk for most organisations — the value of human taste, cultural resonance, and relational understanding in creative work remains significant, and it is not easily replicated. But organisations whose creative moat relies primarily on execution capacity rather than judgement and originality are more exposed than they may yet recognise.
Stanford HAI’s AI Index finds that employees who guide AI outputs see productivity gains of 30% to 35%, compared to far smaller gains when full automation replaces human oversight — a differential that applies most directly to creative work, where workflows are among the most commonly redesigned in organisations that have moved beyond augmentation. The gap between redesigning and bolt-on augmenting is not incremental; it is structural, and it compounds.
The human residual in creating work is originality, cultural resonance, taste, and meaning — the qualities that make a human-directed piece resonate rather than merely function. The durable human contribution is not production; it is the judgement about what is worth producing, what will land with a specific audience, and what reflects values that cannot be outsourced. Organisations that invest in developing this form of human creative capability alongside AI production capacity are building a more defensible moat than those focused on execution efficiency alone.
Delivering: the physical frontier
Delivering is furthest from full AI substitution, but it is moving faster than most leadership teams recognise — and the mechanism of change is different from the other three dimensions. AI restructured thinking, deciding, and creating primarily through software — from machine learning models quietly reshaping how organisations score risk and forecast demand, to large language models transforming how knowledge work is produced and consumed. Delivering is being restructured through embodied AI: robotics, autonomous systems, computer vision operating in physical environments, and predictive systems that change how physical operations are managed rather than merely monitored.
In delivery work, the augmentation-to-restructuring transition looks different from the cognitive dimensions. Augmentation means AI assists human operators — predictive maintenance alerts, quality control flagging, route optimisation suggestions that humans act on. Restructuring means the operational architecture itself changes: fewer human decision points embedded in the physical workflow, tighter integration of sensing and response, and in some operations, fully autonomous process segments running within defined parameters. The most advanced manufacturing and logistics operations are not running augmented versions of their previous processes; they are operating differently configured physical-digital systems where human oversight is concentrated at higher-order decisions and exception handling rather than embedded throughout the workflow.
The restructuring of delivering work is not a distant prospect — it is already visible in the mundane and the significant alike. Autonomous robotic lawn mowers are now a mainstream consumer product; the task of mowing a lawn has not been augmented, it has been removed from the human schedule entirely. Tesla's Full Self-Driving system now handles extended journeys across the United States with the human supervising rather than actively driving — embodied AI operating in unstructured, high-variability physical environments that would have been considered unsolvable a decade ago. Further along the trajectory, Tesla's Optimus programme offers a preview of what restructuring looks like in the highest-stakes delivery context of all: physical care. The prospect of a robotic system capable of providing consistent, round-the-clock support to elderly or dependent people at home does not merely augment the work of carers — it restructures who delivers that care, where, and at what cost. None of these examples are speculative. They are points on a curve that is moving faster than most leadership teams have yet priced into their operational planning.
What is accelerating the timeline for delivering work is not primarily AI capability — it is cost. Goldman Sachs Research reports that humanoid manufacturing costs have already fallen 40% year-on-year, with current unit costs now ranging between $30,000 and $150,000 — and further reductions expected in the coming years. Japan's Robot Association reported record quarterly output in Q4 2025, with 51,797 units produced and 54,740 units ordered, the sixth consecutive quarter of year-on-year growth. At these trajectories, the economics of substituting routine physical work pass viability thresholds in a growing range of sectors well before those sectors have restructured their operating models to anticipate it. Forrester, publishing its first dedicated report on humanoid robotics in March 2026, declared that "2026 marks a turning point" for physical AI, with the gap between simulation and real-world execution narrowing faster than most operational planning cycles have anticipated.
Beyond manufacturing, the service sector is beginning to face comparable dynamics. An IEEE survey published in December 2025 projects humanoid deployment into controlled service environments — warehouses, hospitals, hospitality — within two to three years, and into semi-public spaces such as corporate campuses and assisted living facilities by 2030–35. The implication for sectors such as logistics, retail, and facilities management is that the window between “this is a manufacturing story” and “this is our story” is narrower than the current state of deployment might suggest.
The human residual in delivering work is adaptability and trust: the capacity to navigate novel physical situations that automated systems cannot yet handle reliably, and the relational dimension of being a trusted service presence. The sheer complexity of real-world physical environments — their variability, edge cases, and unpredictability — provides the most durable buffer against full substitution in this dimension. But that buffer is narrowing, and the speed of narrowing differs significantly by sector.
The asymmetry is the strategy
The four dimensions are not interchangeable, and an organisation that treats AI strategy as a single question — how much AI are we deploying? — is asking the wrong question. The strategic question is which dimensions are moving fastest in its sector, where the redesign opportunity is greatest, what the human residual looks like in each case, and how the organisation's current position maps onto the augmenting-restructuring-substituting trajectory in each dimension separately. The answers to those questions determine where the competitive moat is actually being built — and in which dimensions it is already eroding.
The organisations furthest ahead in The Great Remaking did not apply a uniform approach. They understood the asymmetry, invested where compounding advantage was available soonest, and developed human capability in the forms that remain durable in each dimension rather than in the forms most exposed to substitution. Their data is richer, their talent more capable, and their processes more tightly integrated with their AI systems — not because they spent more, but because they redesigned how work is structured around AI rather than simply adding AI capability to existing structures. McKinsey’s March 2026 analysis finds that organisations redesigning end-to-end workflows around AI achieve 3.6 times higher three-year total shareholder returns and 1.7 times higher revenue growth than laggards — while 81% of organisations report no meaningful bottom-line gains despite widespread experimentation. These outcomes are not the product of procurement decisions; they are the product of redesign decisions taken dimension by dimension.
Understanding where your organisation sits on each dimension’s trajectory is the first requirement of a credible Board-level AI conversation. The second — which the next article in this series examines — is understanding why the gap between those who redesigned and those who augmented compounds over time, and crucially why it cannot be closed through procurement or a single transformation programme. The mechanisms of compounding are as important as the diagnosis itself, because they are what determine the real urgency of the response.
Let's Continue the Conversation
Thank you for reading about how AI is remaking thinking, deciding, creating, and delivering work at different speeds and through different mechanisms. I'd welcome hearing about your organisation's experience navigating this asymmetry — whether you're finding that one dimension is moving far faster than the others in your sector, wrestling with the accountability gap that opens up as AI assumes more of the substance of decisions, or discovering that redesigning workflows produces compounding advantages that bolt-on augmentation simply cannot match. The dimension where Boards tend to underestimate the pace of change varies considerably by industry, and I'd value your perspective on where the remaking is most visible — and most consequential — in your context.




