Tagged with: #board-director
Posts tagged with #board-director cover areas such as fulfilling fiduciary duties in the age of AI, presenting frameworks for responsible AI oversight that balance innovation with appropriate controls.
London | Published in AI and Board | 11 minute read
The UK regime now requires four safeguards for any significant decision taken solely by automated processing: information, representations, human intervention, contestability. On the page these are procedural rights. In practice they all depend on something the law does not name: whether the organisation can interrogate its own decisions well enough for the safeguards to work. For a rule-based system, that capability is built in. For a probabilistic system, it is not, and most Boards have approved those systems without ever asking whether it exists. The first contestability request is when the gap surfaces.
Llantwit Major | Published in Board | 14 minute read
The chair’s role was built for a stable world that no longer exists. The Board’s own work is being remade by AI tools that silently invite the substitution of director judgement, and the work the Board governs is being remade by operational AI deployments most directors cannot interrogate. This article works through how Cadbury, the FRC, and the IoD have set out chair responsibilities, none dispensable, all now requiring different execution. The principle that does not move is collective responsibility. The chair polices its boundary, actively, in both states.
Boards have always governed under incomplete information. What the four indicator types offer is not more information but a progressively higher quality of it. Lagging indicators establish what happened, leading indicators signal direction, predictive indicators model possible futures, and reasoned indicators prove what is certain. Applied in combination to a single decision, they represent maximum fidelity — everything knowable and made available before the judgement is made. This article explains why the distinction between a decision made with maximum fidelity and one made without it matters for every director around the table.
The informational asymmetry between management and the Board has always been the central tension of governance. For AI, it is no longer manageable through existing structural checks; the distance is not merely larger than previous technology waves, it is qualitatively different. A director must be able to interrogate maturity claims, assess whether governance is operational or merely presentational, and identify which AI risks are personal development challenges and which are failures of oversight itself. The IoD has formally named the gap. This article defines what closing it actually requires: not technical fluency, but specific capacities for independent evaluation mapped against the governance obligations every director carries, and a diagnostic framework for identifying exactly where the work needs to start.
Llantwit Major | Published in AI | 10 minute read
The part of AI value that is technological and replicable is also the part that standard progress measures capture best. Pilot counts, budget lines, and strategy documents say nothing about whether the essence of work is genuinely being remade, or whether the three compounding loops are operating. A Board that accepts those reports without probing them is not exercising oversight; it is ratifying a narrative the evidence shows is inflated. This article provides the diagnostic that does: probing questions structured around the data, talent, and process redesign loops, with an interpretive guide to what credible answers look like — and what their absence reveals.
New York | Published in AI and Board | 15 minute read
While organisations transfer decision-making agency to AI systems, accountability remains with humans, yet Boards approve AI deployment without investing in the verification capability needed to ensure it. In this article, I demonstrate why this creates a strategic choice with measurable consequences: augmentation preserves expertise pipelines whilst achieving efficiency gains, but replacement destroys capabilities that cannot be rebuilt, turning apparent cost reduction into systematic competitive disadvantage.
Llantwit Major | Published in AI, Board and Cloud | 10 minute read
I recently hosted a fireside chat for the AWS Summit EMEA with Intel’s Global Leader for AI Solutions, Monica Livingston. We discussed how Artificial Intelligence (AI) and Machine Learning (ML) are quickly becoming ubiquitous in business. The conversation prompted me to think about how Boards should be approaching the use of AI and ML in their businesses, and how they can ensure they are making the right decisions at the pace these technologies demand.
After nearly 18 months of study and examinations I was confirmed as having satisfied the requirements of the Institute of Directors (IoD) and admitted as a Chartered Director.
Great news from the Institute of Directors (IoD): today I found out that I have passed the Diploma in Company Direction exam after 8 months of study on the IoD’s Company Direction Programme. This clears the way for me to apply to become a Chartered Director.