AI is transforming governance: Six key boardroom priorities

London | Published in AI and Board | 10 minute read
[Image: a futuristic boardroom where professionals discuss around a central table, surrounded by holographic AI displays of analytics and decision metrics, symbolising collaboration, innovation, and ethical AI governance. Generated by ChatGPT 4o]

The rapid advancement of artificial intelligence is fundamentally changing the velocity of business decision-making and how organisations operate, compete, and create value. With AI, boards are moving from overseeing hundreds of decisions made per day to millions made per second - and they must be confident that each of those decisions is transparent, explainable, and correct.

This new reality demands a governance framework that matches both the speed and scale of AI-driven decision-making. While developing the Well-Advised Framework and working with boards across multiple industries, from global enterprises to private equity portfolio companies, I’ve observed how traditional governance frameworks struggle to address these unique challenges. The approaches that worked for overseeing traditional technology implementations simply aren’t sufficient for AI.

Six Priorities for AI Governance

Through hundreds of conversations with boards navigating the AI landscape, I’ve identified six critical areas of concern that must be addressed to effectively govern AI initiatives. These priorities emerged from real-world experiences helping organisations implement AI governance structures that actually work, rather than theoretical models that look good on paper but fail in practice.

Strategic Alignment: Steering the AI Journey

AI isn’t just another technology implementation - it’s a fundamental reshaping of how organisations make decisions, serve customers, and create value. During my time at AWS, I’ve seen organisations repeatedly struggle when they treat AI as merely a technical initiative. The most successful implementations occur when boards actively shape how AI supports and advances their organisation’s long-term vision.

For instance, one manufacturing customer I worked with initially focused solely on AI for predictive maintenance. They soon realised there were further opportunities: to reduce the production error rates being reported by channel partners (improving customer satisfaction and decreasing the cost of production), and to give field engineers additional training that standardises the resolution of recurring faults, reducing remediation and ongoing support costs. AI frequently has impact across multiple dimensions of an organisation, and a narrow focus on one outcome can hide the other benefits.

The challenge for many boards lies in balancing short-term operational improvements with long-term strategic transformation. This strategic perspective is essential, but it must be balanced with ethical considerations.

Ethical Considerations: Accountability at Machine Speed

When AI makes a decision, the board is making that decision. The speed may be different, but the accountability remains the same. Through my work developing governance frameworks at AWS, I’ve seen how critical it is to have mechanisms that can enforce ethical guidelines at machine speed.

This was brought home to me while working with an insurance company implementing AI for claims processing. The board was initially concerned about potential bias in AI decision-making - a valid concern given the importance of fair treatment in claims assessment. However, when they implemented robust monitoring and analysis capabilities, they discovered something unexpected: the bias in manual claims processing was actually more pronounced than in their AI model. We often hear that AI needs to achieve human-level reasoning, but in this case, AI raised the bar by demonstrating more consistent, measurable fairness in decision-making than its human counterparts. This finding helped shape their governance approach, leading them to apply similar scrutiny and controls to both human and AI decision-making processes.
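To make this concrete, here is a minimal sketch of the kind of monitoring that can surface such disparities. The field names and the simple demographic-parity check are my illustrative assumptions, not the insurer’s actual system:

```python
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    """One logged claims decision; the fields are illustrative."""
    decided_by: str   # "human" or "ai"
    group: str        # a protected attribute, e.g. an age band
    approved: bool

def approval_rate_gap(decisions: list[ClaimDecision], decided_by: str) -> float:
    """Largest difference in approval rate between any two groups -
    a simple demographic-parity check."""
    pool = [d for d in decisions if d.decided_by == decided_by]
    rates = {}
    for group in {d.group for d in pool}:
        members = [d for d in pool if d.group == group]
        rates[group] = sum(d.approved for d in members) / len(members)
    return max(rates.values()) - min(rates.values()) if rates else 0.0

# Applying the same scrutiny to both processes is the point:
# a larger gap for "human" than for "ai" decisions is exactly
# the finding the board did not expect.
```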

The implementation of ethical AI principles requires more than just guidelines - it needs practical, operational frameworks that can scale. Drawing from my experience with the Well-Advised Framework, successful organisations build these considerations into their AI systems from the start, rather than trying to bolt them on later. The insurance company’s experience shows how proper governance can not only protect against AI risks but also highlight opportunities to improve existing processes.

Financial and Operational Impact: Beyond the Balance Sheet

AI doesn’t just change how we work - it changes the economics of how we create value. Traditional ROI metrics often fail to capture the full impact of AI initiatives. This mirrors what I saw in my early years at AWS, when I created a tool for building cloud business cases that went beyond measuring just total cost of ownership and later co-authored the Cloud Value Framework.

As I detailed in my recent article on measuring AI value, boards and executives can easily get lost in a sea of metrics - from technical performance indicators to total cost of ownership calculations. The Well-Advised Framework provides a structured approach for measuring this broader value creation, considering both immediate operational efficiencies and longer-term strategic advantages. In my work with boards, I’ve found this comprehensive view essential for making informed decisions about AI investments.
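I won’t reproduce the framework here, but as a sketch of what moving beyond a single metric looks like in practice, a board-level scorecard might weight several value dimensions into one comparable number. The dimensions and weights below are placeholders of my own, not the Well-Advised Framework’s actual categories:

```python
# Illustrative value scorecard - the dimensions and weights are
# placeholders, not the Well-Advised Framework's actual categories.
AI_VALUE_DIMENSIONS = {
    "cost_efficiency": 0.25,        # e.g. reduced cost of production
    "revenue_growth": 0.25,         # e.g. new AI-enabled offerings
    "risk_reduction": 0.25,         # e.g. fewer defects reaching customers
    "strategic_optionality": 0.25,  # longer-term capabilities being built
}

def weighted_value_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-10) into one board-level number."""
    return sum(weight * scores.get(dim, 0.0)
               for dim, weight in AI_VALUE_DIMENSIONS.items())

print(weighted_value_score({
    "cost_efficiency": 7, "revenue_growth": 4,
    "risk_reduction": 8, "strategic_optionality": 6,
}))  # 6.25
```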

The current AI landscape offers numerous opportunities for low-cost, low-friction pilot projects that can demonstrate value quickly. Boards regularly make strategic decisions under uncertainty, guided by their risk appetite; AI’s low barriers to entry add something rarer - the chance to foster an experimental culture that builds confidence through evidence-based learning. While measuring value is crucial, managing the risks of AI at speed presents its own unique challenges.

Risk Management: Governing at AI Speed

The velocity and scale of AI-driven decisions create new types of risks that traditional governance frameworks struggle to address. Working with boards across multiple industries, I’ve seen organisations grapple with scenarios they never previously considered - like an AI making thousands of potentially biased decisions before anyone notices.

This new reality demands a fundamental rethinking of risk governance structures. One of the most critical decisions is where to position the AI Centre of Excellence (CoE) within the organisation. I’ve consistently advocated that the AI CoE should report directly to the board via the risk and compliance committee, not to IT. This isn’t just another technology function - it’s a core governance mechanism for overseeing decision-making at machine speed.

When the AI CoE reports through IT, organisations often focus too narrowly on technical implementation while missing broader strategic and governance considerations. By contrast, boards that position their AI CoE with direct risk committee oversight are better equipped to monitor and manage the full spectrum of AI risks - from model bias and decision accuracy to ethical implications and regulatory compliance. This structure ensures that AI governance gets the board-level attention it requires while maintaining the independence needed for effective oversight.

It’s important to develop a risk management approach that matches the speed of AI decision-making while maintaining robust oversight. This includes real-time monitoring systems, comprehensive decision audit trails, and rapid response protocols - all overseen by an AI CoE that has the authority and independence to enforce governance standards effectively.
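A minimal sketch of what those mechanisms might look like in code - the log destination, record fields, and drift threshold are assumptions for illustration, not a reference implementation:

```python
import json
import time
import uuid

ALERT_THRESHOLD = 0.02  # assumed tolerance for error-rate drift

def audit_record(model_id: str, inputs: dict, output: str,
                 confidence: float) -> dict:
    """Append-only audit trail entry for a single AI decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    # Stand-in for a durable, tamper-evident store.
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

def check_drift(current_error_rate: float, baseline_error_rate: float) -> None:
    """Rapid-response hook: escalate when the live error rate
    drifts beyond the agreed tolerance from its baseline."""
    if current_error_rate - baseline_error_rate > ALERT_THRESHOLD:
        raise RuntimeError("Error-rate drift exceeds tolerance; pause the "
                           "model and notify the AI CoE for review.")
```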

Stakeholder Confidence: Building Trust

Trust in AI is earned in drops but lost in buckets. I’ve observed striking parallels between early cloud adoption and current AI adoption. The patterns of stakeholder resistance are remarkably similar, yet AI adds new layers of complexity.

In the early days of cloud adoption, organisations faced significant stakeholder resistance. IT teams worried about job security, executives questioned data security, and boards were concerned about losing control. Today, I’m seeing these same patterns with AI adoption, but with an additional dimension: AI systems are perceived as not just handling tasks, but ‘thinking like humans’. This creates a deeper level of both fascination and concern among stakeholders.

However, I’ve consistently seen that human involvement remains essential for effective AI adoption. Just as successful cloud transformations weren’t about replacing IT teams but enabling them to work more effectively, AI implementation isn’t about replacing human decision-making - it’s about augmenting and enhancing it. The concept of ‘human in the loop’ isn’t just a technical requirement; it’s fundamental to building stakeholder confidence.

Successful organisations create transparent communication about both the opportunities and challenges of AI adoption. This includes clear communication with customers about how AI is being used, engaging employees in the transformation process, and keeping investors informed about AI strategy and governance. Most importantly, it involves demonstrating how AI and human expertise work together to create better outcomes than either could achieve alone.

Safeguarding Innovation: Managing “Shadow AI”

Shadow AI isn’t just about unauthorised technology use - it’s about unauthorised decision-making at scale. I’ve watched shadow IT evolve into shadow cloud computing, but shadow AI presents even greater risks. When employees use public AI tools to process company data or generate intellectual property, they’re not just creating security risks - they’re potentially compromising the organisation’s IP position.

A critical question boards must address is: who owns the intellectual property generated using public AI models? When employees use these tools to create content, code, or business solutions, the ownership of that output can be unclear. This isn’t just a theoretical concern - organisations are discovering that valuable intellectual property has been developed using public AI tools, potentially compromising their competitive advantage and creating complex legal questions about ownership and rights.

This requires clear policies governing AI tool usage and robust protection of proprietary data and models. Successful organisations create governance structures that promote responsible innovation while protecting their intellectual assets. This means establishing clear boundaries for AI tool usage, understanding the terms of service for public AI models, and creating safe spaces for innovation that don’t compromise IP rights or data security.
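As an illustration of how such a policy can become operational rather than a document on a shelf, a simple gate can check every AI tool against a sanctioned list and the classification of the data involved. The tool names and data classes below are hypothetical:

```python
# Illustrative policy gate for AI tool usage; the tool names and
# data classifications are assumptions, not a recommended taxonomy.
SANCTIONED_TOOLS = {
    "internal-llm": {"public", "internal", "confidential"},
    "public-chatbot": {"public"},  # public tools: non-sensitive data only
}

def may_use(tool: str, data_classification: str) -> bool:
    """Return True only if the tool is sanctioned for this data class."""
    return data_classification in SANCTIONED_TOOLS.get(tool, set())

assert may_use("internal-llm", "confidential")
assert not may_use("public-chatbot", "confidential")  # shadow-AI risk blocked
assert not may_use("unvetted-tool", "public")         # unknown tools denied
```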

The goal isn’t to stifle innovation but to channel it effectively. Just as organisations learned to manage shadow IT through sanctioned alternatives and clear governance, they need to create frameworks that allow for AI experimentation while maintaining appropriate controls. This becomes increasingly critical as open-source models become more widely available and accessible to employees across the organisation.

The challenges of shadow AI and IP protection highlight why a systematic implementation approach is crucial. While the six areas we’ve explored provide the framework for governance, turning this framework into operational reality requires careful orchestration.

Putting the Framework into Action

Implementing effective AI governance requires a systematic approach. I recommend boards start with three key steps:

First, assess your organisation’s current state using the AISA framework to understand your AI maturity level. This baseline assessment should examine not just technical capabilities, but also governance structures, risk management processes, and stakeholder engagement mechanisms.
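I won’t reproduce the AISA framework here, but as a purely hypothetical illustration, a baseline might record a maturity level per assessment area so that progress can be tracked against it at each review:

```python
# Hypothetical baseline record; these dimensions echo the assessment
# areas named above, not the AISA framework's actual structure.
MATURITY_LEVELS = ["initial", "developing", "defined", "managed", "optimising"]

baseline = {
    "technical_capabilities": "developing",
    "governance_structures": "initial",
    "risk_management": "developing",
    "stakeholder_engagement": "initial",
}

for area, level in baseline.items():
    rank = MATURITY_LEVELS.index(level) + 1
    print(f"{area}: level {rank} of {len(MATURITY_LEVELS)} ({level})")
```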

Second, establish your AI Centre of Excellence with direct reporting lines to the board via the risk and compliance committee. The CoE should be empowered to:

- enforce governance standards across the organisation, independently of IT
- monitor AI decision-making in real time and maintain comprehensive decision audit trails
- trigger rapid response protocols when models drift or decisions fall outside agreed boundaries
- oversee the full spectrum of AI risks, from model bias and decision accuracy to ethical implications and regulatory compliance

Third, implement a regular review cycle that includes:

- monitoring progress against clear metrics for measuring success
- adjusting policies as lessons emerge from live AI systems
- developing and regularly testing incident response protocols so the organisation can respond quickly to AI-related challenges

The key to successful implementation is maintaining balance - between innovation and control, between speed and governance, between autonomy and oversight. Regular adjustments to your approach based on lessons learned will help refine your governance model over time.

Conclusion

Effective AI governance requires boards to think differently about oversight, control, and value creation. The framework outlined above provides a starting point, but each organisation must adapt it to their specific context and needs. Success in the AI era requires boards to be both guardians and enablers - protecting the organisation while fostering innovation. By addressing these six key areas of concern, boards can create governance structures that scale with AI’s capabilities while ensuring responsible and value-creating deployment.

Let's Continue the Conversation

I hope this article has provided useful insights about building a framework for AI governance. If you'd like to discuss how these concepts apply to your organisation's specific context, I welcome the opportunity to exchange ideas.

About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at [Amazon Web Services](https://aws.amazon.com) (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the [Institute of Directors](https://www.iod.com), Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.