AI and the Chair: Governing the Board Through The Great Remaking

The chair of a board remains accountable for the Board’s effectiveness, but is no longer fully in control of how that Board’s decisions are being formed. By the time the chair has the chance to ensure that the Board is governing rather than ratifying, the framing has often already settled. Something has shifted under chairs’ feet that has not yet been named.
The constitutional foundation has not moved. Cadbury wrote in 1992 that “the chairman’s role in securing good corporate governance is crucial”. The Companies Act 2006 codified the general duties every director owes to the company, including the statutory duty to exercise independent judgement. The FRC’s 2024 UK Corporate Governance Code carries the same point forward: the chair is responsible for the Board’s overall effectiveness. The IoD’s NEDs Reimagined Commission, reporting in January 2026, reaffirmed the same point through the lens of NED effectiveness, naming the chair as a crucial player in setting the tone for effective governance. The Commission has also begun naming the new terrain at the NED level, identifying technical literacy gaps, the erosion of independence, and the limits of AI judgement as challenges directors must now address.
The Great Remaking series argued that AI is remaking the essence of work, not merely automating tasks within it. The boardroom is no exception. AI in the preparation of board materials risks the silent erosion of director judgement; AI in the operations of the business produces decision opacity that directors cannot pierce. The Board is not a bystander to the remaking but one of the things being remade, as is the work it governs. The chair sits between both, accountable for both, without a clear line of sight into how AI is reshaping either.
This article does not propose new chair responsibilities. The source material gives chairs more than enough already. What has changed are the conditions of execution. The article walks through several existing chair responsibilities, asks how AI changes their execution in each state of the duality (AI in the Board and AI in the business), and identifies what remains constant beneath the change. The chair’s job is no longer simply to discharge those responsibilities. It is to hold the line at which silent erosion is now most likely.
The principle the chair polices
Cadbury named the principle in 1992 as collective responsibility in law: all directors are equally responsible in law for the Board’s actions and decisions, and the obligation to meet them rests with the Board collectively. What the Board experiences in practice is collective accountability. That is the principle on which Board governance rests, and nothing in the AI era weakens it.
Within that principle, agency and accountability behave differently. Agency, the doing of specific work, can be transferred. A director can use AI to summarise a board pack, generate questions, model a scenario, or analyse a dataset, and none of those activities is illegitimate in itself. Accountability cannot be transferred at all. The director remains fully responsible for whether the AI’s output is accurate, what it might have missed, and the contribution they make on the basis of it. Critical, analytical, systems, and creative thinking applied to whatever the AI produces is the layer that keeps accountability where it belongs. Agency can be transferred. Accountability cannot.
Two things break at once when accountability silently transfers with agency. The individual director’s contribution becomes ratification rather than judgement, even when ratification was not what was intended. The Board’s collective ownership of the decision is weakened too, because every silent accountability transfer is one fewer voice carrying its share of the collective burden.
Collective accountability does not exist passively. It is actively maintained, and under Cadbury, the Companies Act, the FRC, and the IoD alike, the chair is the actor responsible for maintaining it. The chair is not merely the custodian of collective accountability; they actively police it. Every existing chair responsibility, whether agenda, information flow, director enablement, the chief executive relationship, performance review, or the modelling of conduct, is in service of that work. The duality changes where the boundary now sits. It does not change what is being protected.
AI in the Board: the behavioural critique
A growing body of commentary celebrates chairs and directors using AI tools in their work — running board materials through ChatGPT, generating questions, drafting minutes — and presents it all as the leading edge of governance modernisation. INSEAD researchers, drawing on focus groups with more than 50 board chairs and committee heads from global companies, have argued that AI improves directors’ preparation, enriches the board’s collective intelligence, and may eventually take part in boardroom discussions. Heidrick & Struggles reinforce the same framing. The commentary suggests that tool adoption is governance transformation. It is not. Tool adoption is the appearance of transformation; in some forms, it is the substitution of machine output for director judgement.
The behavioural failure modes are not difficult to name. A director who reads only the AI-generated summary of a 200-page board pack and asks only the questions the AI generated has not done their work; they have outsourced it to a system that cannot be held accountable for what it missed, or for the framing it imposed. A director who uses AI to generate the questions they ask in the boardroom is not performing the function they were appointed to perform. As an NED put it in a recent Board meeting I attended, “isn’t generating our own questions the whole reason we’re on this Board?” A director who treats six months of ChatGPT use as evidence of AI capability has acquired credentials, not capability — a worse position than acknowledged ignorance, because they believe themselves equipped to challenge AI matters they cannot interrogate.
The same pattern repeats in each case: agency goes to the machine, accountability stays formally with the director, and the thinking layer that connects the two has dissolved. The director appears more capable while becoming less so, and the Board appears modern while losing the substance of its independence. AI and the Director examined this dynamic from the director’s perspective; the chair sees the same problem from the other side, with the additional responsibility of policing it.
The FRC charges chairs with ensuring that all directors “continually update their skills, knowledge and familiarity with the company.” Under AI conditions, that responsibility now includes policing the boundary between agency transfer and accountability transfer. The point is not to restrain tool use; it is to ensure that directors apply the necessary thinking to whatever the AI produces, every time, without exception. The chair who notices a director quoting an AI summary in board discussion has noticed something important; the chair who lets it pass has accepted the quiet transfer of that director’s accountability, and by extension the Board’s.
The information flow responsibility is sharpened by the same logic. Cadbury charges chairs with ensuring that NEDs receive timely, relevant information tailored to their needs. When some of that information now reaches directors pre-summarised by AI, whether produced by directors themselves, the company secretariat, or management, the chair’s responsibility is no longer simply to ensure that information arrives. It is to ensure directors receive it in a form, and at a depth, at which judgement can still be applied. AI-generated summaries that flatten complexity are a chair’s problem, not a productivity feature.
The boardroom dynamic is sharpened too. The IoD’s Commission warned that NEDs can be inhibited from expressing their true opinions because of poor chairmanship. Under AI conditions, a new inhibition appears: the AI-fluent director becomes the de facto translator and interrogator on AI matters, and the rest of the Board defers rather than develops its own capability. Heidrick & Struggles was explicit that concentrating AI knowledge in a single seat is counterproductive; Russell Reynolds’ February 2026 analysis of 398 public company boards and 3,400 directors reached the same conclusion from the depth-versus-breadth angle. The chair must refuse the proxy. Every director carries a share of collective accountability, so every director must engage. The chair who allows the AI conversation to become the AI-fluent director’s conversation has allowed the rest of the Board to silently exit the obligation Cadbury imposed on them in 1992.
AI in the business: the literacy problem
The commentary on AI in the Board has a deeper problem when applied to AI in the business: it treats AI as if it were a single thing, with generative AI fluency standing in as a reasonable proxy for AI literacy more broadly. It is not, and the skills do not transfer. ChatGPT literacy is not AI governance capability.
The mismatch becomes concrete the moment a chair looks at what the business the Board governs is actually running. Most consequential AI deployments are not generative. They are machine learning models making credit decisions, computer vision systems performing safety inspections, reinforcement learning agents optimising logistics, predictive maintenance algorithms, recommendation systems, fraud detection models, or autonomous decision systems. Knowing how to prompt a chatbot does not equip a director to interrogate the training data of a credit model, the failure modes of a computer vision system, the value-alignment problem of a reinforcement learning agent, the drift detection regime around a predictive maintenance algorithm, or the human-override architecture of an autonomous deployment. A bank’s directors, a manufacturing company’s directors, a healthcare organisation’s directors, and a retailer’s directors all need different AI capability. The literature has been treating them as if they need the same one.
The chair’s responsibility is therefore precise. The chair must ensure that the Board has the capability to challenge whatever AI the business is actually using, not just the AI that happens to be in the news. The FRC’s phrase, “familiarity with the company”, carries an operational specificity in the AI era it did not previously need to carry: familiarity with the company now means familiarity with the AI the company is actually deploying. The duty has not changed; what discharging it requires has, and most chairs and most Boards have not yet recognised the gap.
The Six Board Concerns are the lens that prevents the AI conversation from collapsing into something narrower than it is. When AI appears on the agenda, the conversation defaults to risk management, which is one of the concerns but only one. Strategic Alignment, Ethical and Legal Responsibility, Financial and Operational Impact, Risk Management, Stakeholder Confidence, and Safeguarding Innovation form an interconnected system; a Board that discusses AI only through the Risk Management lens has discharged one-sixth of its governance obligation. The chair’s responsibility is to refuse that collapse.
The empirical evidence on agenda discipline is stark. The 2026 Global Board Governance Survey from Protiviti and BoardProspects, published in March 2026 with 772 respondents, found that only 26% of boards discuss AI at every meeting. Among organisations reporting high AI ROI, 63% include AI at every meeting; among low-ROI organisations, only 13% do. The complementary findings tell the same story: 95% of high-ROI organisations report confidence in their ability to integrate AI into operations, against 33% of low-ROI organisations; 93% report confidence in their responsible AI strategy, against 42%. Agenda discipline is not academic. It correlates directly with whether the organisation is realising value from AI at all.
The chair-CEO relationship sits inside the same problem. Cadbury separated the chair from the chief executive precisely so that no individual would have unfettered powers of decision. The IoD Commission named the modern failure mode: issues brought to the Board after management and chair engagement outside the boardroom, presented as a fait accompli. The pathology is at its worst on AI, where operational deployments are technically dense and executives may prefer to present them as settled. The FRC asks chairs to maintain a productive working relationship with the chief executive while preserving the capacity for constructive challenge. Under AI conditions, that means resisting pre-cooked framing on operational AI matters and ensuring that AI questions reach the Board with optionality intact. A chair who allows the AI proposal to arrive at the boardroom door already settled has allowed the Board to be reduced to its ratifying function on exactly the matter where its independent judgement is most needed.
Cadbury also charged chairs with ensuring that executive directors look beyond their executive duties and accept their full share of governance. The point is unusually relevant for AI. Executive directors can disengage from AI governance questions on the grounds that the matter belongs to the CTO. The chair must refuse that disengagement. AI is a governance matter for the entire Board, executive directors included. No director may opt out of the collective obligation; the line runs through every seat at the table, not just the non-executive ones.
The bifurcation risk inside the boardroom
Earlier work on the AI capability bifurcation argued that workforces are splitting into premium-earning and penalty-suffering bands depending on whether they have built genuine capability or accumulated credentials. The same dynamic now operates inside the boardroom. It is the unifying explanation for why the failure modes in both states of the duality are silent rather than visible.
A Board that uses ChatGPT in its preparation, has run an AI awareness session with an external speaker, can discuss large language models confidently in conversation, and has appointed a director with a tech background can feel itself to be modern. The credentials are real. The capability is a different matter: the ability to challenge the AI the business actually runs, to apply genuine judgement to AI-generated outputs in board materials, and to police the agency-accountability boundary. That capability may not exist. That is the boardroom on the penalty side of the bifurcation, disguised as the boardroom on the premium side.
A director who cannot challenge an AI proposal is not relieved of accountability for it because another director can. They are accountable for a decision they cannot interrogate, which is a worse position than acknowledged ignorance, and one no chair should permit a director on their Board to occupy. Bifurcation is the explanation for why that position is now so common. It is also why the chair’s policing of the boundary is now so consequential.
The line that does not move
The chair’s role was built to defend the Board’s collective accountability against the concentration of power, against information asymmetry, and against the silent erosion of independent judgement. Cadbury wrote those defences in 1992. The FRC restated them in 2024. The IoD’s NEDs Reimagined Commission affirmed them again in 2026. None of those defences is dispensable in the AI era. All of them now require different execution.
AI in the Board and AI in the business are not two new responsibilities. They are two states in which every existing chair responsibility now operates, with new failure modes in each. The chair sits between both, accountable for both, with full line of sight to neither. That tension is the chair’s working condition.
The thing that does not move is the constitutional principle. Agency can be transferred to machines for the doing of specific work. Accountability cannot be transferred at all. Collective accountability is the form that principle takes at the level of the Board. The chair polices its boundary. That responsibility predates AI and will outlast every specific AI technology the business currently runs.
The Board the chair leads is being remade: its work, its information environment, its capability requirements, its decision cadence. The organisation the Board governs is being remade more visibly still. What the chair owes the Board, through both remakings, is the active protection of the line that does not move. Everything else can change. That cannot.
Let's Continue the Conversation
Thank you for reading about how chairs police the boundary of collective accountability through AI's two simultaneous remakings. I'd welcome hearing about your Board's experience navigating the duality — whether you're refusing the proxy when AI matters concentrate around the AI-fluent director, working to ensure information reaches the Board in a form at which judgement can still be applied, or resisting fait accompli framing when operational AI deployments arrive at the boardroom door already settled. The piece argues that ChatGPT literacy is not AI governance capability; I'd be interested in how that distinction is showing up in your own boardroom conversations.




