Why Boards Need to Watch the EU's General-Purpose AI Code of Practice

Despite industry lobbying for delays, the European Commission has pressed ahead with the EU’s General-Purpose AI (GPAI) Code of Practice — a signal that the governance bar is moving faster than some anticipated. The Code of Practice became effective on 2 August 2025, marking not just another regulatory milestone but a strategic inflection point for Boards worldwide. This voluntary framework, which offers a presumption of compliance with the EU AI Act, signals the beginning of a diverging regulatory landscape that will fundamentally reshape competitive dynamics. As I outlined in my earlier article on navigating the AI regulatory maze, we’re moving from mapping the terrain to encountering our first major waypoint — and it’s one that demands immediate Board attention.
The timing couldn’t be more significant. Whilst the EU emphasises legal certainty and reduced administrative burden for signatories, the United States has taken the opposite approach. The U.S. AI Action Plan, released on 23 July 2025, prioritises deregulation and acceleration, creating a “dual-track governance challenge” for global Boards.
For Boards, this divergence isn’t about choosing sides. It’s about simultaneously satisfying EU transparency requirements whilst maintaining U.S. innovation velocity. Those who master this dual-track approach will turn regulatory complexity into competitive advantage. Those who hesitate risk being caught in regulatory whiplash, unable to compete effectively in either market.
The GPAI Code of Practice Decoded: What Boards Need to Know
The Code of Practice, despite being voluntary, carries significant strategic weight. It creates a presumption of compliance with EU AI Act obligations — a powerful incentive that transforms it from optional guidance into strategic necessity for any organisation operating in or serving EU markets.
Developed through an extensive multi-stakeholder process involving over 1,000 contributors from industry, civil society, and academia, the Code of Practice targets foundation models and general-purpose AI systems — the building blocks of modern AI applications. Early adopters like Google have already committed, whilst others like Meta have declined, citing unclear obligations. This split response itself creates competitive dynamics worth watching.
Core Requirements Through a Board Lens
The Code of Practice establishes three pillars of obligation that directly impact Board oversight:
Transparency Obligations form the foundation, requiring providers to complete standardised Model Documentation Forms with 10-year retention periods. These cover capabilities, limitations, training data provenance, compute usage, and energy metrics. For Boards, this isn’t just documentation — it’s about establishing defensible positions on AI development and deployment. The requirement to provide downstream users with clear instructions whilst encouraging public release creates both risk management obligations and trust-building opportunities.
Copyright Clarity emerges as perhaps the most contentious requirement. Providers must establish internal policies respecting opt-out mechanisms, avoid piracy sites, and disclose data sources. The requirement for complaint-redress systems for rights holders has drawn criticism from European creative groups such as IMPALA and GESAC, who see it as enabling IP theft; proponents, by contrast, view the framework as enabling innovation with necessary safeguards. For Boards, this represents a legal minefield requiring careful navigation between innovation and compliance.
Systemic Risk Management applies to models trained with more than 10^25 floating-point operations (FLOPs) of cumulative compute — essentially frontier models like GPT-4 or Claude. This triggers requirements for risk assessments, adversarial testing, incident reporting to the EU AI Office, and comprehensive cybersecurity measures. With an estimated 5-15 providers initially affected, this creates a clear tier of “systemic” AI providers with enhanced obligations.
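For Boards wanting intuition on how that threshold works, the test reduces to simple arithmetic on training compute. A minimal sketch, using the common 6 × parameters × tokens rule-of-thumb estimate for training FLOPs; the figures and function names here are illustrative assumptions, not official EU methodology:

```python
# Sketch: classify a model against the EU AI Act's systemic-risk threshold
# of 10^25 floating-point operations of cumulative training compute.
# The 6 * N * D approximation and all example figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens

def is_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True when estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15 trillion tokens
# lands at roughly 6.3e24 FLOPs, just under the threshold:
print(is_systemic_risk(70e9, 15e12))   # False
# A hypothetical 400B-parameter model on the same data crosses it:
print(is_systemic_risk(400e9, 15e12))  # True
```

The point for Boards is less the arithmetic than the cliff-edge it creates: a model just under the line carries materially lighter obligations than one just over it, which makes compute estimates a governance-relevant number.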
The timeline ahead is compressed: the Code took effect for new models in August 2025; February 2026 brings the first compliance review, with enforcement beginning for existing models; and ongoing iterative updates promise continuous evolution towards harmonised standards by 2027.
The U.S. Counterpoint: Deregulation as Strategy
The U.S. AI Action Plan presents a starkly different vision. Built on three pillars — accelerating innovation, building infrastructure, and advancing leadership — it explicitly prioritises speed over red tape. The rollback of Biden-era regulations, promotion of open-source models, fast-tracking of data centres, and national security carve-outs create an environment optimised for velocity.
The Divergence at a Glance
| Dimension | EU Code of Practice | U.S. AI Action Plan |
|---|---|---|
| Primary Focus | Transparency & compliance | Innovation velocity |
| Copyright | Mandatory disclosure & opt-outs | Minimal requirements |
| Risk Management | Systemic risk threshold (>10^25 FLOPs) | National security focus only |
| Innovation Approach | Guardrails first | Deregulation first |
| Enforcement | February 2026 review cycle | Market-driven outcomes |
| Market Philosophy | Trust through transparency | Speed through freedom |
| Economic Impact | Estimated 20-40% compliance overhead | $1-2 trillion in projected AI value by 2030 |
This isn’t merely a philosophical difference; it’s strategic divergence with immediate practical implications. U.S. firms gain speed advantages with projections of $1-2 trillion in AI economic value by 2030. EU firms will face an estimated 20-40% compliance overhead. The UK finds itself caught between models, attempting to serve as a bridge market. Meanwhile, China observes and adapts, learning from both approaches.
For Boards, this creates an unprecedented dilemma. The EU’s 450 million consumers represent a non-optional market for global businesses. The U.S. innovation ecosystem remains vital for technological advancement. Systems must flex across regimes, creating complexity that can’t be solved by choosing one approach over the other. Companies like Palantir, operating across both EU and U.S. regimes, illustrate how dual-track operations expose gaps between compliance obligations and innovation speed.
Why This Matters Now: Three Board Imperatives
As I outlined in my article on AI governance priorities for Boards, six fundamental concerns shape Board oversight of AI. The GPAI Code of Practice amplifies these concerns, but in this dual-track environment, they naturally converge into three paired imperatives that demand integrated responses:
1. Building Stakeholder Confidence Through Strategic Alignment
Early adoption of the Code signals governance maturity to markets increasingly concerned about AI risks. Research indicates that organisations successfully building trust in AI see significant benefits, with investors increasingly viewing transparent AI practices as a proxy for overall management quality. This premium isn’t abstract — it translates into higher valuations, easier capital access, and stronger partnership opportunities.
First-mover advantage extends beyond trust. Early adopters shape implementation standards, influence regulatory interpretation, and establish themselves as responsible innovators. In risk-averse EU markets, this positioning becomes particularly valuable as enterprises seek vendors who can demonstrate both strategic alignment with regulations and stakeholder confidence through transparency.
2. Managing Risk Through Ethical and Legal Responsibility
Documentation requirements for training data create unprecedented transparency obligations that directly impact both risk management and legal exposure. Cross-border data flows, already complex under GDPR, become more intricate when AI training data provenance must be documented and defended. The intersection with intellectual property rights adds another layer — 46 EU CEOs recently warned of “legal grey zones” that risk talent flight to more permissive jurisdictions.
Boards must ensure their organisations develop defensible data strategies that satisfy transparency requirements whilst protecting competitive advantages. This isn’t just about compliance; it’s about maintaining ethical standards and legal responsibility whilst preserving the ability to innovate within evolving notions of data sovereignty and creative rights.
3. Safeguarding Innovation Whilst Optimising Financial Impact
Divergent regulations create inefficiencies that sophisticated organisations can exploit to both safeguard innovation and enhance financial outcomes. Consider how companies like Philips leverage EU standards as a competitive moat, using compliance capabilities to differentiate in markets where trust matters. Others pursue “regulatory arbitrage” — conducting R&D under U.S. frameworks whilst building trust through EU compliance for market entry.
This isn’t about gaming the system; it’s about recognising that different regulatory approaches create different types of value. U.S. markets reward speed and innovation. EU markets value transparency and protection. Organisations that can deliver both gain operational advantages that translate directly to financial performance in both markets.
Practical Board Actions: From Strategy to Implementation
Boards must resist the temptation to build elaborate compliance machinery. Instead, following the principle of ‘minimum lovable governance,’ the focus should be on creating adaptive capabilities that scale with actual risk and opportunity. The GPAI Code of Practice doesn’t require perfection — it requires demonstrable progress and good faith engagement.
Immediate Board Mandate (Next 30 Days)
The Board must mandate a strategic positioning exercise — not a compliance audit. Using the EU’s Model Documentation Form as a reference point rather than a checklist, this exercise should reveal which AI initiatives genuinely trigger Code of Practice requirements and which can proceed unencumbered. This directive should explicitly avoid creating unnecessary documentation for experimental or low-risk systems.
The strategic choice between early adoption, selective compliance, or market-differentiated approaches requires explicit Board-level decision. Frame this as an opportunity to define competitive advantage, not a risk mitigation exercise. For many organisations, ‘good enough’ governance that demonstrates intent may be more valuable than perfect compliance that stifles innovation.
Next Quarter: Board-Directed Priorities
The Board should task the AI Centre of Excellence (AI CoE) with developing adaptive governance capabilities that grow with need:
- Lightweight Documentation: Start with simple model cards for high-risk systems only, building complexity as regulatory scrutiny increases. Most systems need a paragraph, not a dissertation.
- Progressive Governance: Begin with existing risk frameworks, adding AI-specific elements only where current processes fail. Board reporting should focus on material risks and opportunities, not compliance metrics.
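To make “a paragraph, not a dissertation” concrete, a lightweight model card can be little more than a structured record. A minimal sketch, assuming a simple internal schema (the field names and example values are illustrative, not the EU’s official Model Documentation Form):

```python
# Minimal sketch of a lightweight internal model card.
# Schema and values are illustrative assumptions, not a regulatory template.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    known_limitations: list = field(default_factory=list)
    training_data_summary: str = ""
    risk_tier: str = "low"  # documentation depth scales with this tier

card = ModelCard(
    name="support-triage",
    version="1.2.0",
    intended_use="Routing inbound customer tickets to the right queue.",
    known_limitations=["English-language tickets only"],
    training_data_summary="Anonymised internal ticket history, 2021-2024.",
    risk_tier="low",
)

# Serialise for the governance register or Board pack appendix.
print(json.dumps(asdict(card), indent=2))
```

A record of this size is enough to demonstrate good faith for low-risk systems, and the `risk_tier` field gives the AI CoE a natural trigger for expanding documentation only where scrutiny warrants it.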
This isn’t about building Fort Knox; it’s about creating just enough structure to demonstrate good faith whilst maintaining agility. The multi-speed reality means governance must be equally adaptive — tight where necessary, loose where possible.
Year-Ahead: Board Strategic Oversight
The Board must commission strategic intelligence capabilities, not compliance monitoring. As the Code of Practice evolves, the focus should be on identifying where governance creates competitive advantage versus where it’s simply table stakes.
Competitive differentiation through minimum lovable governance requires Board commitment to pragmatic investment. Build capabilities iteratively, starting with the simplest solution that demonstrates good faith. The goal isn’t comprehensive compliance — it’s strategic positioning that satisfies regulators whilst preserving innovation velocity.
The Broader Strategic Context
The governance premium extends beyond compliance. McKinsey insights show that 71% of employees trust their employers to act ethically with AI, whilst investors increasingly view AI governance as a proxy for overall management quality. In a world where AI drives competitive advantage, governance excellence becomes inseparable from business excellence.
We’re entering a multi-speed reality where different sectors, geographies, and regulators move at different paces. Financial services accelerates in the U.S. whilst moving cautiously in the EU. Healthcare maintains strict controls globally. Consumer technology races ahead with minimal constraints. Boards must build adaptive capacity that functions across these varying speeds.
The massive disparities in AI infrastructure investment — with U.S. hyperscalers committing hundreds of billions whilst the EU proposes limited-scale “Gigafactories” — suggest this divergence will accelerate rather than converge. Adaptive strategy becomes essential: building governance capabilities that can evolve with changing regimes whilst maintaining operational effectiveness.
From Maze to Compass: The Path Forward
The GPAI Code of Practice represents far more than another compliance requirement. It’s a strategic inflection point that will separate leaders from laggards in the AI era. Early movers won’t just comply — they’ll shape how compliance works, gaining advantages that compound over time.
As I argued in my regulatory maze article, successful navigation requires both map and compass. The Code of Practice provides the compass bearing — a clear direction for responsible AI development that balances innovation with protection. But navigation requires active choices about speed, route, and destination.
Boards face fundamental questions: Which AI initiatives warrant early Code of Practice adoption? How can compliance capabilities become competitive advantages? What’s the optimal balance between U.S. innovation velocity and EU trust requirements? How do we build governance muscles that strengthen rather than constrain innovation?
These aren’t questions with universal answers. Each organisation must chart its own course based on markets served, risk appetite, and strategic ambitions. But one thing is clear: hesitation is the riskiest strategy. The regulatory landscape is diverging, competitive dynamics are shifting, and first movers are already staking positions.
Questions for Your Next Board Meeting
As you consider the implications of the GPAI Code of Practice for your organisation, bring these questions to your next Board meeting:
- Have we assessed which of our AI systems would qualify under the Code’s thresholds?
- What would early adoption signal to our stakeholders versus waiting for mandatory compliance?
- How can we turn EU-U.S. regulatory divergence into competitive advantage?
- Do we have the governance infrastructure to support multi-speed, multi-jurisdiction AI deployment?
- What investments in regulatory intelligence and adaptive governance do we need to make now?
Answering these questions isn’t optional: the answers define whether your Board treats AI governance as a compliance burden or as a source of competitive advantage. The EU’s General-Purpose AI Code of Practice is now in effect. The U.S. is accelerating in the opposite direction. The time for deliberation has passed. Boards must now decide not whether to act, but how to turn this regulatory divergence into strategic advantage.
The organisations that master dual-track governance — satisfying EU transparency whilst maintaining U.S. innovation velocity — will define the next era of AI-driven competition. The question isn’t whether you’ll need to navigate both regimes. It’s whether you’ll lead or follow in establishing how it’s done.
For detailed analysis of the Code’s requirements and implementation guidance, visit the EU’s Digital Strategy portal. For perspectives on the U.S. AI Action Plan, see the White House AI resources.
Let's Continue the Conversation
Thank you for reading my perspective on navigating the EU's GPAI Code and the emerging dual-track governance challenge. If you'd like to explore how your Board can turn regulatory divergence into competitive advantage—or discuss practical approaches to AI governance that balance compliance with innovation—I'd welcome a conversation.
About the Author
Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two-billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.