---
title: "The Reasoning Gap: The Capability the Law Now Demands of Boards"
date: 2026-05-03
description: From 5 February, UK law requires four safeguards for solely automated decisions. Most probabilistic systems cannot deliver them yet.
author:
  name: Mario Thomas
  email: mario@mariothomas.com
canonical: https://mariothomas.com/blog/the-reasoning-gap/
---

The UK regime now requires four safeguards for any significant decision taken solely by automated processing: information, representations, human intervention, contestability. On the page these are procedural rights. In practice they all depend on something the law does not name: whether the organisation can interrogate its own decisions well enough for the safeguards to work. For a rule-based system, that capability is built in. For a probabilistic system, it is not, and most Boards have approved those systems without ever asking whether it exists. The first contestability request is when the gap surfaces.
<!--more-->
{{< image3 src="the-reasoning-gap" type="photo" alt="A polished walnut boardroom table photographed at eye level, with a tan folder embossed 'System Approved' resting flat on the left and a white envelope marked 'Notice of Contest' standing upright in a brass holder on the right. Empty leather chairs line the far side of the table; cold morning light falls through tall windows behind, illuminating the envelope sharply (Image generated by ChatGPT 5.4)" width="735" height="413">}}

{{< audio2 src="mp3/the-reasoning-gap.mp3" >}}

A [short statutory instrument](https://www.legislation.gov.uk/uksi/2026/425/made?view=plain) lands on **12 May 2026**. It directs the Information Commissioner to prepare a statutory code of practice on artificial intelligence and automated decision-making. The interesting question is not what the code will say. It is what the law already requires.

Since **5 February 2026**, the UK GDPR, as amended by the [Data (Use and Access) Act 2025](https://www.legislation.gov.uk/ukpga/2025/18/contents) (DUAA), has required four safeguards for any significant decision taken solely by automated processing: **information** about the decision, the ability to make **representations** about it, **human intervention** on the part of the controller, and the right to **contest** the outcome. On the page these are procedural rights of the kind UK practitioners have written about for nearly a decade. In practice they depend on something the law does not name. They depend on whether the organisation can interrogate its own decisions well enough for the safeguards to work.

The DUAA replaces the old Article 22 of the UK GDPR with a new set of provisions, Articles 22A to 22D. This is a reorganisation of the regime, not simply a relaxation. The previous framework largely prohibited solely automated decision-making with significant effect except in narrow circumstances. The new framework permits it more widely, in exchange for the four safeguards. Practitioner analyses from [Travers Smith](https://www.traverssmith.com/knowledge/knowledge-container/uks-data-protection-reforms-take-effect-a-new-era-for-automated-decision-making/), [Bird & Bird](https://www.twobirds.com/en/insights/2026/uk/uk-gdpr-uk-privacy-reform-is-finally-going-live--what-does-your-business-need-to-do-now), [Debevoise](https://www.debevoisedatablog.com/2025/11/19/the-uks-new-automated-decision-making-rules-and-how-they-compare-to-the-eu-gdpr/), and [Alston & Bird](https://www.alston.com/en/insights/publications/2026/01/uk-data-use-and-access-act-2025) have all set the position out clearly. The legal architecture is settled. The operational implication is not.

For a rule-based system, the capability is built in: the recipe is the system. For a probabilistic system, it is not, and most Boards have approved probabilistic systems for recruitment screening, credit decisioning, fraud detection, content moderation, and dynamic pricing without ever asking whether the system can explain itself to the person it has just decided about. The first contestability request is when the gap surfaces, usually in a regulator's letter or a tribunal claim rather than a Board paper.

## Explainability in all but name

The word *explainability* does not appear in the statute. The duty does. The existing UK GDPR transparency obligations, carried over from the pre-DUAA regime, require that data subjects be given meaningful information about the logic involved in automated decisions affecting them. The four safeguards are, in substance, exercises in interrogating a decision after the fact. In practice, all four depend heavily on the organisation being able to engage with the reasoning of the decision being challenged.

That is the move from compliance to capability. The safeguards are not boxes to tick. They are functions the system must be able to perform.

The trigger is technology-neutral. A decision counts as solely automated where there is no meaningful human involvement, a standard refined by the CJEU's [*SCHUFA* judgment](https://curia.europa.eu/juris/liste.jsf?num=C-634/21) and the [WP29 guidelines on automated decision-making and profiling](https://ec.europa.eu/newsroom/article29/items/612053) endorsed by the EDPB. The four safeguards apply uniformly across the systems that meet that test. The law does not distinguish probabilistic from deterministic, machine learning from rule-based logic, opaque models from inspectable ones. From the data subject's perspective the duty is the same. From the organisation's perspective, the operational cost of meeting that duty is not.

This is where the Reasoning Gap opens.

## Two classes of system, one legal duty

A rule-based system carries its reasoning on the surface. The eligibility logic, the scoring rule, the threshold calculation: these are the system. When a data subject contests a decision, the organisation can show what rule was applied, what facts it operated on, and what output followed. Information, representations, human intervention, and the right to contest all operate on inspectable logic. The capability is built in.
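To make the contrast concrete, here is a deliberately simple, invented eligibility rule (the thresholds are illustrative, not drawn from any real system). The decision and its explanation are the same few lines of logic, which is why the safeguards come almost for free:

```python
# A minimal sketch of an inspectable, rule-based decision. The rule and the
# thresholds are invented for illustration; the point is that the explanation
# is the code itself.
def assess_eligibility(income: float, existing_debt: float) -> tuple[bool, str]:
    if income < 25_000:
        return False, "Declined: income below the 25,000 threshold."
    if existing_debt / income > 0.4:
        return False, "Declined: debt-to-income ratio above 40%."
    return True, "Approved: income and debt-to-income within policy."
```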

A probabilistic system does not work this way. A model trained on patterns in historical data produces outputs whose internal logic cannot be inspected for any individual case. There is no rule to point to. The system has weights, activations, and statistical correlations, none of which translate into a reason a data subject will accept or a tribunal will recognise. The capability has to be engineered in at design time. It is engineered through decision logs that capture which features mattered for which decisions, through counterfactual explanations that show what would have changed the outcome, through model cards that document the system's intended use and known limitations, and through human-in-the-loop checkpoints designed as workflow rather than ceremony.
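What that looks like in practice is a record captured at the moment the consequential output is generated. The sketch below is one possible shape for such a record; the field names and the idea of a single dataclass are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One logged automated decision, captured when the output is generated."""
    subject_ref: str                        # pseudonymised reference to the data subject
    model_version: str                      # which model and version produced the score
    inputs: dict                            # the features the model actually saw
    score: float                            # raw model output
    outcome: str                            # the consequential decision taken on the score
    top_features: list[tuple[str, float]]   # per-decision feature attributions
    counterfactual: dict                    # smallest input change that would flip the outcome
    reviewed_by: str | None = None          # populated if a human checkpoint intervened
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

A record like this is what the later safeguards operate on: the information duty reads from it, the reviewer works from it, and the contest is answered out of it.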

These are not exotic techniques. They are well developed in the explainable AI literature and increasingly available in production tooling. Counterfactual explanations answer the question of what would have had to be different for the decision to go the other way, which is closer to what a data subject actually wants to know than a probability score. Model cards document, before deployment, what the system is for, what it has been validated against, and where it is known to be unreliable.
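A deliberately naive sketch of the counterfactual idea, with the model and the candidate values as stand-ins for whatever the real system uses: search for the smallest change to one input that flips the decision, because that is the form of answer a data subject can act on.

```python
def nearest_counterfactual(model, inputs: dict, feature: str,
                           candidates: list[float], threshold: float = 0.5):
    """Return the closest value of `feature` that flips the decision, if any.

    `model` is any callable taking a feature dict and returning a score.
    A production implementation would search across features and respect
    plausibility constraints, not just one hand-supplied grid of values.
    """
    original = model(inputs)
    for value in sorted(candidates, key=lambda v: abs(v - inputs[feature])):
        trial = {**inputs, feature: value}
        if (model(trial) >= threshold) != (original >= threshold):
            return feature, value   # e.g. "income would have needed to be 28,000"
    return None  # no flip found within the candidate range
```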

Most production systems are hybrid: probabilistic components feeding deterministic rules, or rule-based gates wrapped around scoring models. That makes the gap harder to see, not easier to close. The capability question has to be asked of the decision logic at the point where the consequential output is generated, not of the system as a whole.

The point is not that one class of system is safer than the other. Rule-based systems can be misspecified, can encode discriminatory rules, and can be applied to the wrong facts. Probabilistic systems can be more accurate, more adaptive, and better suited to certain decision domains. The point is that the four safeguards work differently against the two classes, and the operational cost of delivering them is structurally different.

The regime was not designed with this distinction in mind. It imposes a uniform duty on systems whose capability to meet it is anything but uniform.

## The gap most Boards have already approved

The shadow AI numbers are well known. [UpGuard's November 2025 research](https://www.upguard.com/press/new-research-from-upguard-reveals-68-of-security-leaders-admit-to-unauthorized-ai-usage) found that more than **80%** of employees use unapproved AI tools, including nearly **90%** of security professionals. [BCG's *AI at Work 2025*](https://web-assets.bcg.com/fd/0d/bcc5dfae4cbaa08c718b95b16cf5/ai-at-work-2025-slideshow-june-2025-edit-02.pdf) found that **54%** of employees would use unauthorised tools when corporate solutions fall short. Boards have started to engage with that exposure. The silent equivalent in *approved* systems has not surfaced.

In the past three years, Boards have approved automated decision-making systems across the operational spine of the organisation. Many are probabilistic. Most were approved on the strength of accuracy metrics and business case, not on the strength of evidence that the system could explain its decisions to the person it had just decided about. That absent capability is now a legal exposure, and the [Six Board Concerns](/blog/board-ai-governance-priorities/) frame it directly.

Ethical and Legal Responsibility is the obvious one, since the duty now sits in primary legislation. Risk Management follows close behind, with litigation and enforcement exposure that falls hardest on the probabilistic systems Boards have approved most readily. Strategic Alignment is the third concern: the systems most exposed to the gap are often the ones doing most of the commercial work. The IoD's *[AI Governance in the Boardroom](https://www.iod.com/resources/business-advice/ai-governance-in-the-boardroom/)* (2025) describes the UK as a sector-led regulatory environment with the ICO as a central actor, which is precisely the environment in which a poorly handled contest becomes an enforcement matter rather than a private dispute.

This is not a hypothetical. It is the operational reality in most large organisations as of **5 February 2026**.

## What Minimum Lovable Governance actually does here

Most organisations do not fail here because they lack policy. They fail because the system was never designed to answer the question being asked.

I've described [Minimum Lovable Governance](/toolkit/minimum-lovable-governance/) as the alternative to heavyweight compliance: embedded in how work happens, proportionate to risk, continuous rather than episodic, and lovable in the sense that people use it because it works. The DUAA is what that framing was always for, because Minimum Lovable Governance is not simply a softer alternative to traditional compliance but the operating principle through which a duty like this one actually gets delivered. The Reasoning Gap is not a documentation problem; it is a design problem.

Five priorities follow for Boards now. The first is to inventory the relevant systems by epistemic type, not by automated decision-making as a single category. The DUAA-relevant systems in any organisation are not a homogeneous list. They need mapping by the character of their decision logic, whether rule-based, probabilistic, or hybrid. For each, the assessment is what reasoning the system can reconstruct and what would be required to deliver the four safeguards in operation. This is governance work the privacy team cannot do alone. It needs the engineering function in the room.

The second priority is to treat the four safeguards as capability tests rather than policy positions. For each significant-decision system, can the organisation deliver information, representations, human intervention, and the right to contest in operation, tested against a worked example? The systems that fail those tests are the ones that need design work, not policy work. The third follows directly: build explainability as workflow, not policy. Decision logs at the point of decision, counterfactual reasoning available to reviewers, model cards maintained as living artefacts. Explainability is not a document; it is a function the system performs. Minimum Lovable Governance's embedded-not-imposed test applies directly here.
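One way to make "capability tests rather than policy positions" tangible is to phrase each safeguard as a question the system either can or cannot answer for a single worked example. The interface below is an illustrative assumption, not a compliance standard; the point is whether the answers exist at all.

```python
def safeguard_capability_report(system, sample_decision_id: str) -> dict[str, bool]:
    """For one real decision, ask whether each of the four safeguards could be
    delivered. `system` is assumed to expose the methods used below; a failing
    entry marks design work, not a missing paragraph in the privacy notice."""
    record = system.fetch_decision(sample_decision_id)
    return {
        # Information: can we reconstruct what drove this specific decision?
        "information": record is not None and bool(record.top_features),
        # Representations: can the case be reopened with new facts from the subject?
        "representations": system.supports_case_reopening(sample_decision_id),
        # Human intervention: is there a reviewer workflow, not just an inbox?
        "human_intervention": system.has_review_queue(),
        # Contestability: can we say what would have changed the outcome?
        "contestability": record is not None and bool(record.counterfactual),
    }
```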

The fourth is the most consequential. The Board's role is not to build the capability. It is to refuse to approve systems that cannot deliver the duty. New systems entering production, and renewed approvals for existing systems, should now turn on whether the four safeguards can be delivered as workflow, not on whether they are addressed in the privacy notice.

The fifth is positional. The [Information Commissioner](https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/individual-rights/rights-related-to-automated-decision-making-including-profiling/) is now formally directed to prepare the code, and consultation will follow. The proportionality argument, that the safeguards should be operationalised differently for systems with different epistemic bases, needs to be made by people who understand the engineering and not only by privacy lawyers. Boards have a stake in shaping that.

The strategic move is not to wait for the code. It is to have the capability in place before it lands, and to be one of the voices arguing what proportionate operationalisation looks like when it does.

## The work begins now

**5 February 2026** changed what Boards are accountable for, even if most Boards have not been told. The four safeguards are mandatory now for any significant decision taken solely by automated processing. The 12 May statutory instrument changes the legal weight of the standard the Information Commissioner is empowered to set, but the duty itself is already in force.

The law assumes the capability it requires rather than mandating it. In a rule-based system, the assumption holds. In a probabilistic system, the capability does not exist by default; it has to be engineered in at design time, and most Boards have approved probabilistic systems without confirming it is there.

The Reasoning Gap will surface in enforcement, not in Board papers. The first contestability request is when the absence of capability becomes visible. By then, the cost of closing the gap is several orders of magnitude higher than the cost of building it now.

Minimum Lovable Governance is the operating principle that closes the gap: capability built into the way decisions are made, not bolted on as a policy layer. The Board's role is not to debate that capability. It is to require it, or refuse to approve the system at all.

{{< campaign "the-reasoning-gap" "hello@mariothomas.com" "Hello" "Let's Continue the Conversation" "Thank you for reading about the Reasoning Gap and the capability the new automated decision-making regime now demands of Boards. I'd welcome hearing about your Board's experience confronting this question — whether you're inventorying approved systems by epistemic type for the first time, wrestling with how to deliver the four safeguards in operation rather than in policy, or finding ways to make explainability a function the system performs rather than a document it produces." "Thank you for submitting your details, here's what you provided:" "Click send to share your input with me." >}}