Minimum Lovable Governance: The AI Operating Principle Boards Should Use

London | Published in AI and Board | 13 minute read
A lightweight metal arbour frames an open pathway through a landscaped garden at dawn, representing governance as structure that guides and supports growth rather than constrains it (Image generated by ChatGPT 5)

The concept of minimum lovable governance emerged from a frustration I suspect many readers share: governance that is heavy where it should be light, and light where it should be heavy. I’ve referenced the concept across several articles, and now I want to explain what it means and why it’s the operating principle that makes AI governance work when traditional approaches simply get routed around.

Organisations build elaborate approval processes for low-risk AI experiments while leaving high-stakes autonomous systems to operate with minimal oversight. They create comprehensive policy documents that no one reads while failing to embed practical guidance where decisions actually happen. The result is governance that creates friction without providing assurance, the worst of both worlds.

Minimum lovable governance offers a different approach. It borrows from Eric Ries’s progression in product thinking, documented in The Startup Way: the evolution from Minimum Viable Product (the smallest thing you can ship to learn) to Minimum Lovable Product (the smallest thing customers will actually embrace). Applied to governance, this means building the smallest system that achieves necessary guardrails and that people actually want to engage with.

This distinction provides a useful way to frame the governance challenge. Heavyweight governance frameworks get complied with reluctantly or routed around entirely; governance that is lovable gets used voluntarily. When more than 80% of employees — including nearly 90% of security professionals — use unapproved AI tools in their jobs according to UpGuard’s November 2025 report, the difference between compliance and adoption becomes strategically significant. People route around unloved governance, finding workarounds and operating in shadows. The governance exists on paper but fails to govern in practice.

The problem with traditional governance

Anyone who has been through a compliance audit recognises the pattern. My own experience with PCI audits, starting in 2009, followed a familiar rhythm: weeks of preparation, document gathering, and evidence compilation all compressed into an intense period before the auditor arrived. Maximum effort concentrated into minimum time, producing maximum disruption. The same pattern plays out across SOC 2, ISO certifications, and countless other compliance regimes — annual scrambles that consume disproportionate energy for periodic assurance.

The logic seems reasonable: focus resources when they’re needed, demonstrate compliance when it matters. But this episodic approach creates perverse outcomes. Compliance becomes a point-in-time snapshot rather than an ongoing reality, with evidence reflecting what organisations can retrospectively document rather than what they actually do. The audit passes, everyone exhales, and attention shifts elsewhere until the next audit comes along.

This pattern extends beyond formal audits into how organisations approach AI governance more broadly. Quarterly review boards accumulate backlogs between sessions, approval committees create bottlenecks that incentivise workarounds, and risk registers get updated before board meetings rather than when risks actually change. The governance calendar drives activity rather than operational reality driving governance.

The result is governance that feels heavy to practitioners — consuming time, creating delays, requiring documentation — while providing uncertain assurance to Boards. Directors receive reports suggesting everything is under control, but the shadow AI phenomenon tells a different story: when the majority of employees use AI tools outside formal governance, something fundamental has broken. The governance isn’t governing.

What “lovable” actually means

The word “lovable” in the governance context isn’t emotional — it’s operational. Governance becomes lovable when it possesses specific characteristics that make it something people want to use rather than something they endure.

Minimum lovable governance is embedded in how work happens rather than imposed on top of it. When governance exists as a separate activity — forms to fill, approvals to seek, documentation to produce — it becomes friction. When governance is woven into tools and workflows, it becomes structure people move through naturally rather than something they bump into. The developer who receives automated risk assessment as they deploy a model experiences governance differently from one who must schedule a review board meeting and wait three weeks.
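
To make this concrete, here is a minimal sketch of what an embedded check might look like, assuming a deployment pipeline that can run a Python step before release. The field names, tiers, and rules are hypothetical illustrations, not a prescribed control set.

```python
# Illustrative sketch: a pre-deployment policy gate embedded in the release workflow.
# All names and rules below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class ModelDeployment:
    model_name: str
    uses_personal_data: bool
    customer_facing: bool
    risk_tier: str  # e.g. "minimal", "limited", "high"


def policy_gate(deployment: ModelDeployment) -> tuple[bool, list[str]]:
    """Return (approved, findings) so the pipeline can fail fast, with reasons."""
    findings = []
    if deployment.uses_personal_data and deployment.risk_tier == "minimal":
        findings.append("Personal data in use but tier declared as minimal: "
                        "reclassify or attach a DPIA reference.")
    if deployment.customer_facing and deployment.risk_tier != "high":
        findings.append("Customer-facing system below high tier: "
                        "confirm classification with the governance lead.")
    return (not findings), findings


# The developer gets governance feedback at the moment of deployment, not three weeks later:
approved, findings = policy_gate(
    ModelDeployment("meeting-summariser", uses_personal_data=False,
                    customer_facing=False, risk_tier="minimal")
)
if not approved:
    raise SystemExit("Deployment blocked:\n" + "\n".join(findings))
```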

Minimum lovable governance is continuous rather than episodic. Instead of concentrating assurance activities into defined review periods, it distributes governance across time so that compliance is always current. The organisation can demonstrate its governance posture at any moment, not just after weeks of preparation. When the regulator calls tomorrow, the answer comes in hours, not weeks.

Minimum lovable governance is proportionate to risk. Not every AI use case requires the same controls — a customer-facing credit decisioning system demands different governance than an internal productivity chatbot summarising meeting notes. Proportionality isn’t about doing less governance; it’s about matching governance intensity to actual risk, ensuring that high-stakes applications receive appropriate scrutiny while low-risk experiments can proceed without unnecessary friction.
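
A proportionate control mapping can be as simple as a lookup from risk tier to required evidence. The sketch below is illustrative only; the tiers and control names are hypothetical and loosely echo the EU AI Act's tiered structure rather than reproducing it.

```python
# Illustrative sketch of proportionality: governance intensity scales with risk tier.
# Tiers and control names are hypothetical examples.
CONTROLS_BY_TIER = {
    "minimal": ["acceptable-use acknowledgement"],
    "limited": ["acceptable-use acknowledgement", "transparency notice"],
    "high":    ["documented risk assessment", "human oversight plan",
                "bias and performance testing", "conformity assessment", "audit logging"],
}


def required_controls(risk_tier: str) -> list[str]:
    """Look up the controls a use case must evidence before go-live."""
    try:
        return CONTROLS_BY_TIER[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}") from None


# A meeting-notes chatbot and a credit decisioning system get very different checklists:
print(required_controls("minimal"))
print(required_controls("high"))
```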

Minimum lovable governance provides clarity at the point of decision. Policy documents sitting on SharePoint don’t govern behaviour. What governs behaviour is the guidance people receive when they’re actually making choices — contextual, specific, actionable. Should I use this customer data for training? Can I deploy this model to production? What approvals do I need for this use case? Minimum lovable governance answers these questions where and when they arise.

The test is straightforward. If people are routing around your governance, it isn’t governance — it’s documentation. If they’re using it grudgingly just to get the job done, it’s tolerable but fragile. If they’re embracing it because it enables them to innovate and move faster, you’ve achieved minimum lovable governance.

Why this is possible now

Minimum lovable governance isn’t just a nicer philosophy — it’s an operating model that has only recently become viable. Three developments make this the moment when the approach moves from aspiration to reality.

First, regulatory architecture increasingly demands proportionality. The EU AI Act’s risk-tiered structure explicitly requires governance that scales with risk rather than applying uniform controls — high-risk AI systems operating in employment, credit, or law enforcement contexts require comprehensive documentation, human oversight, and conformity assessment, while lower-risk applications face proportionately lighter obligations. Even where comprehensive legislation hasn’t emerged, the direction of travel is clear: China’s AI Safety Governance Framework emphasises “categorised and tiered management”, and the US NIST AI Risk Management Framework builds on proportionate, context-specific controls. This regulatory convergence creates both permission and pressure for organisations to match governance intensity to risk — precisely what minimum lovable governance calls for.

Second, standards are maturing to provide reference points. ISO 42001 establishes requirements for AI management systems, offering organisations a destination for comprehensive governance capability. ISO/IEC 42006:2025 adds auditing and certification requirements, making the pathway to formal certification more concrete. These standards create benchmarks against which organisations can assess their governance maturity and chart progressive improvement.

Third, and most significantly, AI-assisted governance is now viable. When AI systems make millions of decisions per second rather than the hundreds of decisions per day that human-led processes can manage, traditional governance simply cannot keep pace. The same technology being governed must assist in the governing. This represents the fundamental shift that makes minimum lovable governance achievable at scale.

Consider what continuous governance previously required: human reviewers monitoring every model deployment, manually checking compliance with policies, documenting decisions in real-time, and maintaining audit trails across thousands of interactions. The resource intensity made this impossible for any organisation operating AI at scale — continuous governance remained theoretically desirable but practically unachievable.

AI changes this calculus. Systems can now monitor AI behaviour in real-time, flagging anomalies and policy violations automatically while generating compliance documentation as a by-product of operation rather than a separate administrative burden. Contextual guidance can be provided at the point of decision, drawing on policy repositories that would take humans hours to search, and lineage and audit trails can be maintained without manual effort.
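
As a rough illustration of what that by-product evidence might look like, the sketch below checks each decision event against simple rules and appends an audit record as it goes. The event fields, thresholds, and file-based store are hypothetical stand-ins for whatever monitoring and evidence infrastructure an organisation actually runs.

```python
# Illustrative sketch: continuous monitoring with compliance evidence generated
# as a by-product of operation. Fields, thresholds, and storage are hypothetical.
import json
import time

AUDIT_LOG = "ai_audit_trail.jsonl"  # assumed append-only evidence store


def record_decision(event: dict) -> None:
    """Write an audit record for every model decision; no separate documentation step."""
    event["recorded_at"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")


def check_policy(event: dict) -> list[str]:
    """Flag anomalies or policy violations in real time."""
    flags = []
    if event.get("confidence", 1.0) < 0.6:
        flags.append("low-confidence decision: route to human review")
    if event.get("contains_personal_data") and not event.get("purpose_approved"):
        flags.append("personal data used without an approved purpose")
    return flags


# Each decision is checked and evidenced as it happens:
decision = {"model": "credit-scorer-v3", "confidence": 0.55,
            "contains_personal_data": True, "purpose_approved": True}
flags = check_policy(decision)
record_decision({**decision, "flags": flags})
if flags:
    print("Escalating to human reviewer:", flags)
```

Because every decision leaves a record at the moment it happens, answering "show me your evidence" becomes a query rather than a retrospective document hunt.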

This isn’t about removing humans from governance — it’s about transferring the operational burden while preserving human accountability. The Board remains responsible for governance decisions, and humans remain in the loop for judgement calls: the edge cases, the ethical dilemmas, the novel situations that require contextual understanding. But the administrative machinery that made traditional governance so burdensome transfers to AI systems.

In my previous article on accountability, I explored how organisations are transferring agency for decision-making from humans to AI systems, while accountability cannot be similarly transferred. The same principle applies to governance itself: organisations can delegate the operational mechanics — the monitoring, checking, documenting, flagging — while retaining human accountability for the governance framework and its outcomes. This is transfer of agency, not transfer of accountability.

Five principles for minimum lovable governance

When I think about minimum lovable governance, I find it helpful to frame it through five principles — each with a test question that Boards can ask to assess their current state.

Governance should be continuous, not episodic. Traditional governance concentrates assurance activities into defined review periods — annual audits, quarterly boards, monthly reports. This creates cycles of neglect and scramble, where governance receives intense attention periodically but limited attention between cycles. Minimum lovable governance distributes assurance continuously, so compliance is always current rather than retrospectively demonstrated. Deloitte’s 2025 Trustworthy AI research finds that real-time governance significantly improves regulatory readiness and reduces compliance remediation costs.

Here’s what to ask: If an auditor called tomorrow, how long would it take to demonstrate compliance — seconds or weeks?

Governance effort should be proportionate to risk. Not every AI application carries the same risk. A model recommending internal meeting times poses different governance challenges than one making credit decisions affecting customers’ financial futures. Proportionality means matching governance intensity to actual risk — not applying uniform heavyweight processes to everything, and not leaving high-risk applications under-governed while bureaucracy accumulates around low-risk experiments. Deeploy’s 2025 framework targets seven control layers specifically at high-risk AI Act categories, demonstrating how proportionality can be operationalised. The EU AI Act’s entire architecture embeds this principle.

Here’s what to ask: Do we apply the same approval process to a marketing chatbot as we do to a credit decisioning system?

Governance should be embedded in work, not imposed on top of it. When governance exists as a separate activity — additional forms, external approvals, parallel documentation — it becomes something people do in addition to their work. This creates friction that incentivises workarounds and drives activity into the shadows. When governance is embedded in tools and workflows, it becomes part of how work happens rather than an interruption to it. PwC’s 2025 Responsible AI Survey shows that adaptive governance practices correlate with 30–40% faster innovation cycles — suggesting that embedded approaches accelerate rather than impede progress.

Here’s what to ask: Do our AI teams experience governance as part of their workflow or as an interruption to it?

Governance should exploit the technology it governs. AI systems can monitor AI systems. Using AI for governance enables continuous oversight that would be resource-prohibitive with purely human processes — real-time monitoring, automated compliance checking, contextual guidance at scale. This principle distinguishes minimum lovable governance from simply doing less governance. The ambition isn’t reduced oversight but transformed oversight, using technology to achieve assurance levels that manual approaches cannot match.

Here’s what to ask: What percentage of our governance activities (not just AI governance) are themselves AI-assisted?

Accountability remains with humans; operational burden can transfer to AI. Automating governance mechanics doesn’t mean automating accountability. Someone must remain answerable for governance outcomes, and that someone is invariably human. For Boards, this carries particular weight: directors remain jointly and severally liable for governance failures, regardless of how much operational machinery has been delegated to AI systems. Humans handle judgement calls and exceptions. But the administrative machinery — the monitoring, documenting, checking, flagging — can transfer to AI systems designed for that purpose.

Here’s what to ask: Is it clear who is accountable for AI governance decisions, even when AI systems assist in monitoring and compliance?

Where minimum lovable governance fits

Readers familiar with my work will recognise how minimum lovable governance connects to frameworks I’ve discussed previously. It operates across all Six Board Concerns — the interconnected governance priorities that Boards must address holistically rather than sequentially. It scales appropriately with the AI Stages of Adoption, providing lighter governance for organisations experimenting with initial pilots and more comprehensive governance for those scaling AI across ecosystems. It enables capability building across the Five Pillars without creating bureaucratic overhead that paralyses progress.

But minimum lovable governance isn’t another framework to implement. It’s an operating philosophy that makes existing frameworks work. The question isn’t whether to adopt the Six Concerns or progress through the Stages of Adoption — those describe what governance must address. The question is how to design governance that achieves those objectives while remaining something people actually want to use.

For organisations considering formal certification, minimum lovable governance provides a progressive path toward ISO 42001 readiness. Most organisations aren’t ready for comprehensive AI management system certification — attempting it prematurely would create exactly the kind of heavyweight governance that people route around. Minimum lovable governance builds the organisational muscle that makes standards compliance achievable when you’re ready for it, starting with embedded, proportionate, continuous approaches and maturing toward comprehensive capability over time.

The strategic choice

The question facing Boards isn’t whether to govern AI — that decision has been made by regulators, by stakeholders, and by the operational risks that ungoverned AI creates. The question is whether governance will be something the organisation does or something the organisation is.

Traditional governance approaches treat oversight as a separate function, administered by compliance teams, documented in policy libraries, reviewed in periodic committees. This governance exists parallel to operations, occasionally intersecting when approvals are needed or incidents occur. It satisfies the formal requirement to have governance while leaving actual practice largely unchanged.

Minimum lovable governance takes a different view. It treats governance as organisational capability — embedded in how work happens, continuous in operation, proportionate to risk, enabled by the technology it governs. This isn’t governance as constraint but governance as infrastructure, providing the foundation for confident AI deployment rather than creating friction that slows it.

The shadow AI phenomenon reveals which approach organisations have actually chosen, regardless of what their policies claim. When the majority of employees use AI outside formal governance, the organisation has governance on paper but not in practice. When governance is so embedded that working outside it requires more effort than working within it, the organisation has achieved something different — governance that governs because people want to use it, not because they’re forced to comply.

That’s minimum lovable governance. Not the smallest amount of governance you can get away with, but the smallest governance system that achieves necessary guardrails while being something people actually embrace. The research suggests this approach correlates with faster innovation, higher trust, and better regulatory readiness. More fundamentally, it’s the only approach that works when AI operates at speeds and scales that traditional oversight cannot match.

The choice is yours. Build governance people route around, or build governance people love to use. The outcomes will differ accordingly.

Let's Continue the Conversation

Thank you for reading about minimum lovable governance as the AI operating principle that Boards can actually use. I'd welcome hearing about your organisation's experience with AI governance: whether you're frustrated by governance that creates friction without assurance, discovering shadow AI that reveals where formal processes have failed, or exploring how to make governance something people embrace rather than route around.

About the Author

Mario Thomas is a Chartered Director and Fellow of the Institute of Directors (IoD) with nearly three decades bridging software engineering, entrepreneurial leadership, and enterprise transformation. As Head of Applied AI & Emerging Technology Strategy at Amazon Web Services (AWS), he defines how AWS equips its global field organisation and clients to accelerate AI adoption and prepare for continuous technological disruption.

An alumnus of the London School of Economics and guest lecturer on the LSE Data Science & AI for Executives programme, Mario partners with Boards and executive teams to build the knowledge, skills, and behaviours needed to scale advanced technologies responsibly. His independently authored frameworks — including the AI Stages of Adoption (AISA), Five Pillars of AI Capability, and Well-Advised — are adopted internationally in enterprise engagements and cited by professional bodies advancing responsible AI adoption, including the IoD.

Mario's work has enabled organisations to move AI from experimentation to enterprise-scale impact, generating measurable business value through systematic governance and strategic adoption of AI, data, and cloud technologies.