
AI's Interconnected Challenge: Diagnosing the Six Concerns of the Board

Sydney | Published in AI and Board | 12 minute read
A concert hall with a conductor at the podium studying six different musical scores spread before them, with six distinct beams of stage light illuminating different sections of empty orchestra seats, representing the Six Concerns that must be understood as an interconnected system rather than isolated elements (Image generated by ChatGPT 5)

Last week’s article exposed how Boards mistake accumulating AI pilots and their business cases for building an AI strategy. Many organisations have responded by implementing robust governance frameworks, establishing AI committees, and attempting to address the Six Concerns I outlined in my earlier work on Board priorities for AI governance. They tick governance boxes, follow best practices, and implement ISO 42001. Yet their AI initiatives still fail to deliver strategic transformation.

This paradox – good governance yielding poor outcomes – reveals a fundamental diagnostic error. The Six Concerns aren’t items to check off sequentially but an interconnected system, where strength in one area without corresponding attention to the others creates cascading vulnerabilities. The diagnosis is stark: organisations fragment what must be unified, isolate what must be integrated, and sequence what must be simultaneous.

The pattern repeats across every failed transformation. Strategic Alignment without Ethical Responsibility triggers regulatory crises. Risk Management without Innovation Safeguarding guarantees competitive irrelevance. Financial Impact measurement without Stakeholder Confidence ensures value destruction through resistance. Each concern addressed in isolation doesn’t just fail to solve problems – it actively creates new ones.

This systemic blindness, not individual pilot failures, explains why even well-governed initiatives collapse. The diagnostic power of the Six Concerns mechanism lies not in recognising each concern individually – most Boards already do that – but in understanding how they interconnect, conflict, and cascade. What appears as six separate governance priorities actually operates as a single system where weakness in one area undermines strength in all others.

Strategic Alignment: The multi-speed collision

Strategic Alignment appears straightforward: ensure AI initiatives support business objectives. Deloitte’s 2025 Global Board Survey found that whilst 69% of Boards discuss AI regularly, only 33% feel equipped to oversee AI strategy effectively. This gap manifests not in absence of alignment but in failure to recognise that different business functions naturally advance AI at different speeds.

McKinsey’s research confirms that adoption rates vary from over 60% in tech-forward functions to under 20% in traditional operations. Marketing races ahead with generative AI whilst legal cautiously evaluates contract analysis. Customer service transforms with chatbots whilst finance methodically tests predictive analytics. Each function aligns AI with its own objectives, but not with an overall organisational AI strategy.

The diagnostic insight reveals that Strategic Alignment pursued in isolation creates multi-speed collisions across the organisation. Merchandising AI might excel at predicting trends, but if inventory AI hasn’t prepared for those products, the prediction becomes useless. Marketing AI generates sophisticated demand that supply chain AI cannot fulfil. Customer service AI makes promises that operations AI cannot deliver. Each function’s “successful” alignment actively undermines the others. The core problem isn’t lack of alignment – it’s the assumption that all functions can or should move at the same velocity. This pattern of velocity mismatch ripples through every other concern, creating ethical gaps between fast and slow adopters, throttling innovation where functions lag behind.

This fragmentation compounds when Strategic Alignment ignores other concerns. Fast-moving functions adopting AI without addressing Ethical and Legal Responsibility create compliance gaps. Slow-moving functions focusing on Risk Management without Safeguarding Innovation lose competitive position. The interconnections reveal why project-level alignment, however rigorous, cannot deliver systematic transformation.

Ethical and Legal Responsibility: The dynamic compliance gap

Ethical and Legal Responsibility encompasses the expanding obligations and regulatory requirements surrounding AI deployment. McKinsey’s latest research reveals that whilst organisations recognise AI risks, fewer than half actively address inaccuracy risks, despite these being the most commonly reported negative consequence. This gap between recognition and action reflects deeper diagnostic challenges.

The pattern emerges when organisations implement ethical frameworks designed for static systems in dynamic AI environments. AI systems that meet all regulations at launch might violate them months later as they learn and adapt. What satisfies fairness requirements in one jurisdiction might discriminate in another. What seems ethically sound in testing might create harm at scale.

Consider how this concern interconnects with others. Ethical frameworks that slow deployment (to ensure compliance) conflict with Safeguarding Innovation (maintaining competitive pace). Legal requirements for explainability clash with Financial and Operational Impact when simpler, more profitable black-box models outperform interpretable alternatives. Privacy protections that build Stakeholder Confidence might prevent the data sharing necessary for Strategic Alignment across functions. As with the multi-speed collisions in Strategic Alignment, organisations find themselves managing different ethical velocities – what’s acceptable in research isn’t in production, what works in one market violates another’s norms.

Deloitte reports that 66% of Boards have limited AI knowledge, creating ethical blind spots. Cybersecurity concerns (51%) and privacy issues (43%) top their worry list, yet many lack the expertise to evaluate whether their governance frameworks actually address these risks. This diagnostic gap – governing what they don’t understand – creates false confidence that amplifies the very risks organisations seek to mitigate.

Financial and Operational Impact: The value attribution crisis

Financial and Operational Impact should be the most straightforward concern – measure costs and benefits. Yet McKinsey’s workplace analysis shows 31% of organisations see no cost reduction from AI, whilst 29% actually experience cost increases. This isn’t because AI doesn’t create value but because traditional metrics cannot capture how AI value emerges.

AI creates value through compound effects that defy simple attribution. Network optimisation AI reduces outages, which improves customer satisfaction, reduces churn, increases revenue per user, justifies network expansion, and enables new services. The total value far exceeds the sum of its individual components, yet project-based evaluation misses these systemic benefits.

This concern’s interconnections compound the measurement challenge. Without Strategic Alignment, departments optimise locally whilst destroying global value – showing excellent project ROI whilst the organisation haemorrhages money. Without stakeholder buy-in, resistance erodes projected benefits before they materialise. Without proper risk assessment, hidden exposures dwarf any visible gains.

This local optimisation echoes the multi-speed fragmentation in Strategic Alignment, whilst unpredictable value emergence exacerbates ethical risks when systems behave unexpectedly. The value attribution crisis intensifies when organisations force automation (cost reduction), personalisation (revenue increase), and innovation (new possibilities) into identical financial frameworks, systematically undervaluing transformation whilst overvaluing incrementalism.

Risk Management: The emergent threat paradox

Risk Management traditionally assumes risks can be identified, assessed, and mitigated through established controls. AI breaks these assumptions. S&P Global’s analysis found 42% of organisations scrapped AI initiatives due to unexpected risk materialisation, up from 17% the previous year. Risks emerge from AI systems in ways traditional frameworks cannot anticipate. These emergent threats compound the ethical uncertainties discussed earlier, whilst the innovation tensions explored later demand risk frameworks that enable rather than paralyse.

AI risks evolve through learning, adaptation, and interaction. Credit scoring models that perform perfectly on historical data might discriminate against emerging demographics. Supply chain AI optimised for efficiency might increase fragility during disruptions. Content moderation trained on current standards might fail when social norms shift. These aren’t failures of risk management but characteristics of systems that learn.

The interconnections amplify complexity. Aggressive Risk Management conflicts with Safeguarding Innovation – the safest AI is often the least valuable. Risk controls that ensure Ethical and Legal Responsibility might prevent the experimentation necessary for Strategic Alignment. Risk mitigation that satisfies regulators might alarm stakeholders, undermining Stakeholder Confidence through visible constraints.

Most critically, Risk Management often ignores the risk of inaction. Whilst organisations deliberate over AI risks, competitors deploy and learn. The diagnostic pattern is clear – organisations become so focused on managing AI risks that they create an even greater risk: competitive irrelevance. NACD’s 2025 survey found 31% of organisations remain unprepared to deploy at scale, not because they lack capability but because risk frameworks paralyse decision-making.

Stakeholder Confidence: The trust multiplier effect

Stakeholder Confidence recognises that AI success depends on trust across employees, customers, regulators, investors, and partners. McKinsey reports that 71% of employees trust their employers on AI safety, but this confidence erodes quickly when governance gaps become visible.

Stakeholder Confidence reveals a critical diagnostic insight: trust isn’t built through communication but through systematically addressing each group’s specific concerns. Employees need evidence that AI augments rather than replaces them. Customers demand transparency about data usage. Regulators require demonstrable compliance. Investors seek returns without liability. Partners want innovation without risk transfer. This cascades like the multi-speed collisions in Strategic Alignment, amplifying ethical gaps when trust erodes and paralysing risk management when confidence collapses.

These diverse demands create paradoxical requirements when tackled piecemeal. Job security guarantees for employees clash with automation’s financial imperatives. Customer transparency demands conflict with algorithmic competitive advantages. Regulatory compliance satisfying one group triggers alarm in another. Each stakeholder’s lost confidence cascades through the ecosystem – employee doubt breeds customer suspicion, customer concerns alert regulators, regulatory scrutiny spooks investors. Technical triumphs become organisational disasters when this trust multiplier effect turns negative.

Safeguarding Innovation: The velocity imperative

Safeguarding Innovation addresses the fundamental tension between control and creativity. Governance frameworks designed to ensure responsible AI deployment often throttle the experimentation essential for competitive advantage. The diagnostic challenge: how to maintain innovation velocity whilst ensuring appropriate oversight.

Research shows a 68% surge in shadow AI usage, with employees achieving productivity gains that official pilots struggle to match. This reveals an uncomfortable truth: ungoverned AI often delivers immediate value precisely because users select tools that match their actual needs, adapt them to real workflows, and iterate rapidly based on feedback. Marketing teams adopt ChatGPT for content creation, analysts employ Claude for research synthesis, developers integrate GitHub Copilot – all without waiting for official approval. They solve specific problems faster than official tools and processes can deliver.

The pattern exposes how Innovation Safeguarding creates tension with every other concern. Innovation demands uncertainty that Risk Management abhors. It requires experimentation that Ethical frameworks might prohibit. It needs velocity that thorough Financial analysis prevents. It thrives on diversity that Strategic Alignment might discourage. This shadow AI success also highlights risks – ungoverned tools create data leakage (threatening Ethical Responsibility), erode trust when discovered (undermining Stakeholder Confidence), and fragment governance (weakening Risk Management).

Yet these tensions also point toward resolution. Innovation flourishes within frameworks that match governance intensity to risk level, distinguish reversible experiments from permanent commitments, and enable controlled failure rather than demanding perfection. The diagnostic insight here fundamentally reframes the challenge: innovation isn’t safeguarded by reducing governance but by designing governance that accelerates rather than throttles.

The systemic diagnosis

These Six Concerns don’t operate independently – they form an interconnected system where addressing one whilst ignoring others creates new vulnerabilities. The diagnostic power lies not in recognising each concern but in understanding their interactions.

When Strategic Alignment pursues functional objectives without coordinating velocities, multi-speed collisions fragment the organisation. When Ethical and Legal Responsibility implements static frameworks for dynamic systems, compliance becomes illusion. When Financial and Operational Impact uses project metrics for systemic value, transformation appears worthless. When Risk Management applies traditional controls to emergent systems, safety becomes paralysis. When Stakeholder Confidence addresses groups separately, trust spirals destroy value. When Safeguarding Innovation conflicts with other concerns, shadow AI proliferates beyond governance.

Whilst 92% of companies plan increased AI investment, only 1% achieve AI maturity. The diagnosis explains this gap: organisations govern AI through project-level thinking that addresses concerns sequentially rather than systematically. They apply industrial-age frameworks to AI-age technology, command-and-control structures to capabilities requiring emergence and adaptation.

The velocity challenge amplifies every tension. AI evolves faster than quarterly Board cycles can govern. By the time frameworks are approved, technology has advanced. By the time pilots are evaluated, competitors have scaled. Boards aren’t just governing technology but governing acceleration itself – requiring frameworks that anticipate rather than react, that match quarterly oversight with continuous adaptation, that integrate risk registers with predictive tools for emergent threats. This demands leveraging existing Board mechanisms: audit committees for continuous monitoring, dynamic threat tracking through enhanced risk registers, and strategy reviews that account for technological velocity.

From diagnosis to guiding policy

This diagnosis – that project-level governance addressing concerns in isolation creates systemic failure – demands a different approach. Richard Rumelt’s strategy framework teaches that diagnosis must lead to guiding policy, a coherent approach addressing the challenge’s root causes rather than symptoms.

The Six Concerns reveal what must be governed: strategic coherence across different velocities, ethical evolution in dynamic systems, value emergence through compound effects, risk patterns that learn and adapt, stakeholder confidence across diverse groups, and innovation velocity that maintains competitive advantage. But diagnosis alone doesn’t provide the how.

The next article presents the Complete AI Framework as guiding policy that addresses these diagnostic insights. By integrating the Five Pillars of capability building, AI Stages of Adoption for maturity recognition, and Well-Advised for value creation, organisations can govern the Six Concerns as an interconnected system rather than isolated checkboxes. The framework shows how to orchestrate multi-speed adoption without fragmentation, build dynamic ethical frameworks, capture systemic value, manage emergent risks, maintain stakeholder confidence, and safeguard innovation velocity.

This guiding policy enables three-dimensional metrics aligned with each concern – leading indicators track Innovation velocity and early Strategic Alignment signals, lagging measures confirm Financial Impact and Risk outcomes, whilst predictive indicators forecast Stakeholder Confidence trends and Ethical compliance trajectories – bridging diagnosis to policy as explored in next week’s article.

These metrics ensure Boards can track whether their governance truly addresses all six concerns rather than hoping disconnected projects somehow cohere. Once Boards understand that their Six Concerns form an interconnected diagnostic system, they need a practical approach that governs these concerns holistically whilst maintaining the agility AI demands – what I call minimum lovable governance. The challenge isn’t choosing between governance and innovation – it’s designing governance that enables innovation whilst addressing all six concerns simultaneously. That’s the strategic imperative facing every Board confronting AI transformation.

Let's Continue the Conversation

Thank you for reading about diagnosing AI governance through the Six Concerns. I'd welcome hearing about your Board's experience with these interconnected challenges - whether you're seeing multi-speed adoption creating collisions, shadow AI outperforming governed initiatives, or success in addressing these concerns systematically rather than sequentially.




About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two-billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.