AI and the Director: A Practical Playbook for Governing What You Can't Fully See

London | Published in Board | 11 minute read

[Image: a figure in a dark suit, partially concealed behind a heavy charcoal velvet curtain, one hand gripping the curtain edge in sharp directional light against a black background — a visual metaphor for the unseen operator whose workings a director is expected to trust without seeing. Generated by ChatGPT 5.4.]

The governance frameworks that define the non-executive role were built on a workable assumption: that a director, equipped with the right information and the right structure, could exercise genuine independent judgement. For most strategic questions, that assumption holds. AI is the first major technology wave that breaks it.

A director who cannot evaluate an AI strategy cannot challenge it. One who cannot identify AI risk cannot oversee it. And when a director’s understanding of AI decisions comes entirely from the people making them, independent judgement becomes ratification by another name. The distance between what management knows and what a director can independently assess is not merely larger than in previous technology waves — for most Boards, without deliberate development, it is unbridgeable.

The observation is structural, not adversarial: the gap exists because AI creates an informational asymmetry that existing governance frameworks were not built to manage, and the Institute of Directors has formally named it as such. Every director, on every type of Board, has an obligation to address it.

The obligation is no longer advisory

In January 2026, I wrote about The AI Talent Bifurcation, which argued that the skills-versus-credentials distinction dividing the workforce applies with equal force to the boardroom. A director who cannot evaluate AI strategy cannot govern it. That piece identified the governance weakness. This article defines what the remedy requires.

Every non-executive director is expected to perform two distinct functions. The first is oversight: monitoring management’s conduct, evaluating risk, and ensuring the organisation is being run in a manner consistent with its stated strategy and values. The second is constructive challenge: bringing independent perspective to bear on strategic decisions, asking the questions executives are too close to their own work to ask, and ensuring the Board genuinely stress-tests management’s proposals rather than endorsing them. These two functions are the foundational rationale for the non-executive role everywhere it exists, across every major governance framework and every jurisdiction.

Both are structurally compromised by AI illiteracy, simultaneously and for the same underlying reason. A director unable to independently assess AI strategy cannot challenge it constructively, and one unable to evaluate AI risk cannot oversee it meaningfully. The information asymmetry that AI creates between management and the Board is not simply larger than in previous technology waves; it is qualitatively different. AI strategies involve probabilistic systems, non-deterministic outputs, and capability claims that require a specific kind of literacy to interrogate. Unlike a financial statement or an operational report, an AI strategy briefing cannot be evaluated by a director who brings no AI-specific conceptual framework to the conversation.

Every director, across every major governance framework, owes a duty of care to the organisation they serve. That standard is not static; it evolves with the materiality of the issues a director is expected to oversee. AI is now unambiguously material to most organisations. A director who takes no steps to develop the capacity to evaluate it is, by any reasonable standard of directorial diligence, falling short. Not because directors must become technologists, but because an appropriate level of AI literacy is now a prerequisite for the role.

The analogy to financial literacy is instructive. The expectation that a director should understand the basics of financial reporting was not always settled principle — it became one when financial illiteracy was recognised as a governance failure in its own right. The argument for AI literacy is structurally identical. What changed in financial governance was not the availability of information but the standard of engagement that directors were expected to bring to it. The same shift is underway on AI, and it is further along than many Boards have recognised.

There is one further dimension worth naming. If a director’s view on AI matters is formed entirely from management briefings they cannot interrogate, they are not exercising independent judgement. They are endorsing someone else’s.

The IoD’s 2025 NEDs Reimagined Commission makes this explicit. Recommendation 11 calls on NEDs to build their understanding of AI and adopt relevant tools to enhance Board effectiveness and informed decision-making. The gap is sharpest for non-executives, who carry the greatest need for independent evaluation and the least operational visibility — but the obligation belongs to the Board collectively. The same report surfaces the precise problem: many NEDs lack the relevant knowledge and, critically, do not know where to acquire it. Boardroom conversations about AI remain predominantly defensive, focused on managing risk rather than evaluating opportunity. The Commission offers a pointed observation: Boards that are primarily fearful about technology are most likely the same Boards failing to push management to think ambitiously about its use — a quiet indictment that most governance conversations have not yet sat with long enough.

The obligation is clear. What remains unanswered by most guidance is what directorial AI literacy actually consists of.

What directorial AI literacy actually means

Deloitte’s 2025 Global Board Survey makes the scale of the gap plain: two-thirds of Boards still report limited or no knowledge of, or experience with, AI, while one-third of respondents are either not satisfied with, or concerned by, the amount of time their Boards devote to discussing it. That is not a technology gap. It is a literacy gap — and the IoD’s own survey data shows what it looks like at director level. Nearly two-thirds of directors now personally use AI tools to aid their work. Half report that their organisation uses AI in at least one of its functions or processes. Yet a quarter remain concerned about the lack of an internal AI policy, strategy, or data governance framework in their organisation. Directors are experimenting with the technology while simultaneously lacking the governance infrastructure their oversight role requires.

It is worth being precise about what directorial AI literacy is not. A director does not need to understand transformer architecture, prompt engineering, or model configuration. These are operational matters. A director who has spent two days on an AI awareness course and can discuss large language models at a dinner party has not developed directorial AI literacy. They have collected a credential: the same premium-versus-penalty distinction explored in the Bifurcation article, now applied one level up the organisation. The credential provides comfort without capability, which in a governance context is worse than acknowledged ignorance: it closes down the questions that acknowledged ignorance would prompt.

What a director actually needs is a set of four specific capacities, each mapping directly onto a governance function.

The first is the capacity to interrogate maturity claims. Management will present AI initiatives at a level of maturity that reflects their ambitions as much as their reality. A director with genuine AI literacy can ask: where on the adoption curve does this organisation actually sit, and what is the evidence for that assessment? The AI Stages of Adoption, which run from Experimenting and Adopting through Optimising, Transforming, and Scaling, offer a structured lens for exactly this question. Not as a technical tool, but as a set of governance questions. An organisation at the Experimenting stage faces fundamentally different governance obligations than one at Scaling, carrying different risk profiles, different resource requirements, and different accountability structures. A director who cannot place their organisation on that curve, or who accepts management’s placement without interrogating the evidence for it, is governing in the dark.
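To make the lens concrete, here is a minimal sketch, in Python, of the stages as a structured checklist. The stage names are the article’s; the example questions and the questions_for helper are my own hypothetical illustrations of how a director might turn the curve into an agenda, not an official instrument.

```python
# Illustrative sketch: the AI Stages of Adoption as a governance checklist.
# Stage names follow the article; the questions are hypothetical examples.

ADOPTION_STAGES = ["Experimenting", "Adopting", "Optimising", "Transforming", "Scaling"]

GOVERNANCE_QUESTIONS = {
    "Experimenting": [
        "What hypothesis is each pilot testing, and what evidence would end it?",
        "Who is accountable for the data the experiments consume and produce?",
    ],
    "Adopting": [
        "Which deployment decisions has our governance framework actually been applied to?",
        "What is the escalation path when a deployed system behaves unexpectedly?",
    ],
    "Optimising": [
        "How do we measure drift between a system's behaviour and its original specification?",
    ],
    "Transforming": [
        "Which processes now depend on AI, and what is the fallback if a system is withdrawn?",
    ],
    "Scaling": [
        "How do accountability structures keep pace as deployments multiply across functions?",
    ],
}

def questions_for(claimed_stage: str) -> list[str]:
    """Return the governance questions for the stage management claims to be at."""
    if claimed_stage not in GOVERNANCE_QUESTIONS:
        raise ValueError(f"Unknown stage {claimed_stage!r}; expected one of {ADOPTION_STAGES}")
    return GOVERNANCE_QUESTIONS[claimed_stage]

if __name__ == "__main__":
    for question in questions_for("Experimenting"):
        print("-", question)
```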

The second is the capacity to assess governance adequacy. An AI governance framework is not adequate simply because it exists on paper. A director with genuine literacy can distinguish governance that has been stress-tested (applied to real deployment decisions, tested against actual edge cases, revised when it failed) from governance that exists for presentational purposes. The difference is visible in how management responds to challenge. Specific, evidence-based responses indicate that governance is operational. Vague reassurances, circular appeals to policy documents, and the inability to cite examples of governance applied in practice are reliable indicators of what might be called governance theatre: the appearance of oversight without the substance of it.
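The distinction lends itself to a simple checklist. The sketch below is purely illustrative: the signal wording paraphrases this paragraph, and the tallying heuristic is my own assumption, not a validated instrument.

```python
# Illustrative only: signals of operational governance versus governance theatre,
# paraphrased from the paragraph above. The tally is a crude heuristic, not a test.

OPERATIONAL_SIGNALS = [
    "specific, evidence-based responses to challenge",
    "cited examples of governance applied to real deployment decisions",
    "documented revisions after the framework failed an edge case",
]

THEATRE_SIGNALS = [
    "vague reassurances",
    "circular appeals to policy documents",
    "no examples of governance applied in practice",
]

def read_the_room(observed: list[str]) -> str:
    """Lean towards 'theatre' unless operational signals clearly dominate."""
    operational = sum(1 for s in observed if s in OPERATIONAL_SIGNALS)
    theatre = sum(1 for s in observed if s in THEATRE_SIGNALS)
    return "operational" if operational > theatre else "governance theatre"

print(read_the_room(["vague reassurances", "circular appeals to policy documents"]))
```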

The third is the capacity to identify material AI risk. The IoD identifies six challenges facing NEDs in the AI era: information overload, increased workload, technical literacy gaps, accountability and ethics obligations, erosion of independent oversight, and awareness of AI’s own limitations. These are not equally weighted. The first three are personal development challenges, affecting the director’s own effectiveness. The latter three are governance challenges, because they affect whether the Board can discharge its obligations at all. A director who cannot distinguish between them cannot prioritise oversight appropriately; they risk treating the development challenges as though they were the governance ones, or vice versa.
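The partition is easy to lose in prose, so here it is as data: a minimal Python sketch encoding the six IoD challenges with the development-versus-governance split this paragraph describes. The Challenge type and field names are my own illustrative choices.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DEVELOPMENT = "personal development"  # affects the director's own effectiveness
    GOVERNANCE = "governance"             # affects whether the Board can discharge its obligations

@dataclass(frozen=True)
class Challenge:
    name: str
    kind: Kind

# The six IoD challenges, partitioned as the article describes.
IOD_CHALLENGES = [
    Challenge("Information overload", Kind.DEVELOPMENT),
    Challenge("Increased workload", Kind.DEVELOPMENT),
    Challenge("Technical literacy gaps", Kind.DEVELOPMENT),
    Challenge("Accountability and ethics obligations", Kind.GOVERNANCE),
    Challenge("Erosion of independent oversight", Kind.GOVERNANCE),
    Challenge("Awareness of AI's own limitations", Kind.GOVERNANCE),
]

# Governance challenges are the Board's oversight priorities; development
# challenges belong on the director's own learning agenda.
oversight_priorities = [c.name for c in IOD_CHALLENGES if c.kind is Kind.GOVERNANCE]
print(oversight_priorities)
```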

The fourth capacity, which is the synthesis of the other three, is the capacity to exercise genuinely independent judgement on AI decisions. A director who can interrogate maturity claims, assess governance adequacy, and identify material risk can form a view on AI decisions that is genuinely their own: not a ratification of management’s position dressed as independent governance. This is the destination. The three capacities before it are the route.

These capacities are not acquired through passive awareness. They require a specific kind of active development and a set of questions that directors can begin applying immediately.

The questions every director should be asking

The governance questions that flow from these four capacities divide naturally into three clusters, each corresponding to a core directorial obligation.

On oversight, the diagnostic question is not “do we have AI governance?” but rather “can I evaluate what I am being asked to oversee?” This means asking: do I have sufficient understanding of AI’s capabilities and limitations to evaluate management’s claims about them? If management tells me we have strong AI governance, can I ask a question that tests that assertion rather than simply accepting it? Can I distinguish between AI that is well-governed and AI that is merely well-presented? Can I identify the difference between an AI deployment that is operating as intended and one that has drifted from its original specification without anyone noticing? If the answer to any of these is no, the development priority is clear, because oversight without the capacity to evaluate is oversight in name only.

On independent judgement, the diagnostic question is: is my view genuinely mine? Is my position on AI matters formed from my own analysis, or has it been constructed for me by the briefings I receive? If the AI strategy changed materially, would I know, or would I find out when a journalist asked? Do I have access to independent resources on AI that are not filtered through executive framing? The IoD’s Recommendation 10, calling on NEDs to access their own independent resources and sources of insight, speaks directly to this. Independence of judgement on AI requires independence of information on AI. A director who relies entirely on management for their understanding of the technology, its capabilities, its limitations, and its strategic implications cannot be genuinely independent in their governance of it, however diligently they apply themselves to the briefings they are given.

On strategic challenge, the diagnostic question is: can I interrogate the trajectory, not just the position? Can I assess independently whether the organisation’s AI strategy is likely to create value or erode it? Do I understand where the organisation sits on the adoption curve relative to its competitive context, not only in absolute terms but relative to the pace at which any gap compounds? The pace question matters because AI competitive advantage is not linear. Organisations that are ahead are not simply ahead; they are accumulating data assets, talent capabilities, and process learning that cannot be replicated by catching up on a shorter timeline. A director who grasps only the absolute position (“we are adopting AI”) without understanding the relative trajectory cannot adequately oversee the strategic risk that position represents.
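For directors who want to apply the three clusters systematically, here is a minimal self-assessment sketch in Python. The question wording paraphrases this section; the yes/no scoring and the gaps helper are illustrative assumptions of mine, not the forthcoming readiness assessment.

```python
# Illustrative self-assessment: the three diagnostic clusters from this section.
# Answer each question with True (yes) or False (no); the 'no' answers are the
# development priorities. The scoring scheme is an assumption, not a standard.

DIAGNOSTIC_CLUSTERS = {
    "Oversight": [
        "Can I evaluate management's claims about AI capabilities and limitations?",
        "Can I ask a question that tests 'we have strong AI governance'?",
        "Can I tell well-governed AI from merely well-presented AI?",
        "Could I spot a deployment that has drifted from its specification?",
    ],
    "Independent judgement": [
        "Is my position formed from my own analysis, not constructed by briefings?",
        "Would I know if the AI strategy changed materially?",
        "Do I have independent sources of AI insight, unfiltered by executive framing?",
    ],
    "Strategic challenge": [
        "Can I independently assess whether the AI strategy creates or erodes value?",
        "Do I understand our position on the adoption curve relative to competitors?",
        "Do I understand the pace at which any competitive gap compounds?",
    ],
}

def gaps(answers: dict[str, list[bool]]) -> dict[str, list[str]]:
    """Per cluster, return the questions answered 'no' -- the development priorities."""
    return {
        cluster: [q for q, ok in zip(questions, answers[cluster]) if not ok]
        for cluster, questions in DIAGNOSTIC_CLUSTERS.items()
    }

example = {
    "Oversight": [True, False, False, True],
    "Independent judgement": [True, False, True],
    "Strategic challenge": [False, True, False],
}
print(gaps(example))
```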

Most directors who sit with these questions will find gaps. That is the point. The IoD survey evidence is clear: the majority of Boards are focused on managing AI risk rather than evaluating AI opportunity, and many directors lack the knowledge they would need to do either well. The gap between recognising AI’s importance and developing the specific literacy to govern it is precisely where most Boards currently sit. A general sense of the territory will not close it.

From awareness to capability

Reading an article is not a substitute for structured development. The frameworks referenced here, including the AI Stages of Adoption and the governance questions mapped to directorial obligations, form part of my broader advisory resource library built for exactly these conversations — a starting point for directors who want to move from a general sense of the territory to something more deliberate.

The principle is straightforward: the smallest structure that is genuinely used will always outperform the most comprehensive framework that sits on a shelf. The questions in this article are designed to be applied at the next Board meeting, not filed away.

The non-executive role was built on the principle of independence. On AI, independence has a prerequisite — and most Boards have not yet been given a systematic way to build it.

Let's Continue the Conversation

Thank you for reading about the governance gap that AI creates for non-executive directors. I'd welcome hearing about your Board's experience navigating this — whether you're finding that your current approach to AI oversight gives you genuine confidence in the assessments you receive from management, discovering that the questions in this article surfaced gaps you hadn't previously named, or working through how to build AI literacy at Board level without it becoming a technical exercise. If you'd like to be among the first to access the Director's AI Governance Readiness Assessment when it launches, let me know in your message and I'll ensure you're notified directly.