The Board in the machine

Llantwit Major | Published in AI, Board and Cloud | 10 minute read

(Image: a modern boardroom with subtle digital overlays of abstract data streams and holographic charts, symbolising the integration of technology in decision-making. Generated by ChatGPT 4o)

I recently hosted a fireside chat for the AWS Summit EMEA with Intel's Global Leader for AI Solutions, Monica Livingston. We discussed how Artificial Intelligence (AI) and Machine Learning (ML) are quickly becoming ubiquitous in business. The conversation prompted me to think about how Boards should approach the use of AI and ML in their businesses, and how they can ensure they are making the right decisions at the speed of light.

Boards need to be aware of how AI and ML are being used in their business workflows, and of the safeguards and assurances needed to govern that use effectively. They also need to understand the implications of using AI and ML if they are to embed it in the business and increase its adoption.

Why should Boards care?

Directors’ power is given to them collectively as a Board, and that power is usually delegated to committees of Directors, the Managing Director, other Executive Directors, and the people that report to them. However, ultimately it is the Board collectively, and Directors individually who remain legally responsible for the decisions made with those delegated powers.

From chatbots and product recommendations to predictive maintenance, credit underwriting, and recruitment, businesses are making greater use of AI and ML to speed up decisions and improve customer experience. The benefits of using AI and ML to maintain, and even extend, competitive advantage are numerous; but if the wrong decision is made, Directors could find themselves exposed to legal ramifications and liability unless they can demonstrate that the necessary checks and controls were in place.

Five Key Questions

Whenever I’m asked about the adoption of AI and ML in business from the perspective of the C-suite, I work on the basis that executives wish to make ‘right’ decisions; that is, decisions made responsibly and in keeping with high business, ethical, and moral standards.

On this basis, I boil things down to getting answers to five key questions:

1. Is the business currently making use of AI and ML, and what oversight of its use is currently in place?

As a first step, it’s important to understand where AI and ML are being used in the business today, and what purpose they are serving. This is achieved through a discovery exercise run in conjunction with the IT organisation and broader line-of-business leaders. Extending discovery beyond IT should help capture use cases in shadow IT functions, or in business units where consumption is via a cloud service provider and potentially ‘not on the books’. From the collated discovery data, a RACI matrix of stakeholders is created for each AI and ML application, and those stakeholders complete a survey to capture their requirements and reasons for using AI and ML. This information will form the basis of an ‘AI/ML Register’; much like an asset register.
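To make the idea concrete, here is a minimal sketch of what one entry in such an AI/ML Register might look like in code. The field names, workflow, and stakeholder roles are illustrative assumptions, not a prescribed schema; the point is that each entry pairs a workflow with its purpose, its provider, and a RACI mapping of stakeholders.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One row of a hypothetical AI/ML Register (all names are illustrative)."""
    workflow: str   # the business workflow using AI/ML
    purpose: str    # what the model is used for
    provider: str   # in-house, or consumed via a cloud service provider
    raci: dict = field(default_factory=dict)  # RACI role -> stakeholder

register = [
    RegisterEntry(
        workflow="Credit underwriting",
        purpose="Automated credit risk scoring",
        provider="Cloud service provider",
        raci={"Responsible": "Head of Lending", "Accountable": "CRO",
              "Consulted": "Data Science", "Informed": "Audit & Risk Committee"},
    ),
]

# Surface any entries with no Accountable owner -- candidates for follow-up
# before the register can be considered complete.
unowned = [e.workflow for e in register if "Accountable" not in e.raci]
```

Even a simple structure like this makes gaps visible: an entry discovered in shadow IT with no accountable owner stands out immediately.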

2. How do I satisfy myself that our use of AI and ML is transparent, compliant, fair, and safe?

Once an AI/ML Register has been established, each entry should be evaluated to ensure it is transparent, compliant, fair, and safe:

- Transparent means that the person responsible can demonstrate that the models used are thoroughly tested, that the decisions made can be explained and justified, and that the AI and ML can stand up to the same or similar levels of scrutiny as would be applied to the process before AI and ML were introduced.
- Compliant means that the AI and ML application operates in a way that conforms to organisational standards and best practice.
- Fair means that the data used to train the AI and ML models are free of bias, and that decisions, where they impact individuals, are human-centred, explainable, and justifiable.
- Safe means that the use of AI and ML does not have an adverse effect on the environment, individuals, or other businesses. It also means that the data used in the models is secured in the same way and to the same (or better) standards as if AI and ML were not used.
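The four criteria above lend themselves to a simple checklist that can be run over every register entry. This is a sketch under the assumption that each criterion is recorded as a yes/no flag once the responsible stakeholder can evidence it; real evaluations would, of course, sit behind documented tests and reviews rather than booleans.

```python
# The four evaluation criteria from the AI/ML Charter discussion.
CRITERIA = ("transparent", "compliant", "fair", "safe")

def outstanding_criteria(entry: dict) -> list:
    """Return the criteria a register entry has not yet evidenced.

    `entry` maps each criterion to True once the responsible stakeholder
    can evidence it (tested and explainable models, conforming operation,
    unbiased training data, secured and non-harmful use).
    """
    return [c for c in CRITERIA if not entry.get(c, False)]

# A hypothetical chatbot entry that has evidenced everything except safety.
chatbot = {"transparent": True, "compliant": True, "fair": True, "safe": False}
gaps = outstanding_criteria(chatbot)
```

Anything returned by the checklist becomes a reportable gap, which is exactly the kind of exception-based reporting an Audit and Risk Committee would expect to receive.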

3. What criteria should we apply when making a decision to use or not use AI and ML for a business workflow?

The criteria used to evaluate the use of AI and ML should be documented in an organisation-wide ‘AI/ML Charter’. The criteria should be developed in collaboration with the data scientists and software engineers who implement AI and ML across the business, with a heavy focus on the principles of transparency, compliance, fairness, and safety. The AI/ML Charter should be public and shared broadly across the business; it should be used as a tool to build trust in the use of AI and ML and to drive innovation and appropriate use, not as a blocker to it.

4. How do we guide our business users in selecting the right mix of AI and ML versus traditional ways of achieving the same outcome?

The cloud has democratised access to AI and ML services that were previously only available to organisations with multi-million-dollar budgets. This ease of access should not translate to unchecked use of AI and ML. The AI/ML Charter should drive good behaviour and make it clear to the business that selecting the right solution and approach is more important than selecting a technology because it is in vogue to do so. Satisfying the business requirements should far outweigh a need to be seen to be doing something cool.

5. Is using AI and ML compatible with our environmental, social, and governance (ESG) agenda?

A key consideration in any use of AI and ML is the impact that use has on the environment. AI and ML applications can sometimes carry a larger carbon footprint than the human processes they replace, and the AI/ML Charter should make it clear where the balance should be struck. Socially, businesses should ask themselves whether their use of AI and ML is human-centred and fair; if it is not, then it should not be used. Finally, use of AI and ML should be aligned with the governance principles of the organisation and any external compliance and governance requirements it must adhere to.

The Role of the Board

Governance of the use of AI and ML should be the responsibility of the Board and its Directors rather than of IT or lines of business. Specific powers should be given to the Audit and Risk Committee to act as the oversight function for all AI and ML use in the organisation, and to provide regular reporting to all Directors at Board meetings on any aspects of the use of AI and ML that do not meet the organisation’s AI/ML Charter.

As businesses become more complex, interconnected, and responsive to competitive threats, they will need to make many more decisions with much greater speed than ever before. AI and ML will continue to grow in use in business and become a critical tool for differentiation in a crowded marketplace. With the right approach to governing the use of AI and ML in their businesses, Boards will find that there are no barriers to making the right decisions at the speed of light.

About the Author

Mario Thomas is a seasoned professional with over 25 years of experience in web technologies, cloud computing, and artificial intelligence. In his role as the Head of the Global Trainer Centre of Excellence and Press Spokesperson at Amazon Web Services (AWS), Mario develops executive training programs and AI sales enablement strategies worldwide. He is a Chartered Director and a Fellow of the Institute of Directors, providing valuable insights to Board Directors and senior executives on leveraging technology for organisational transformation.