Minimum Lovable Governance
The Governance Paradox
More than 80% of employees - including nearly 90% of security professionals - use unapproved AI tools in their jobs. Yet most organisations have AI governance policies. Comprehensive ones. Carefully documented. Rarely consulted.
This paradox reveals something important: governance that exists on paper but gets routed around in practice isn’t governance. It’s documentation.
The problem isn’t lack of governance - it’s governance designed as a separate activity rather than structure woven into how work actually happens. Elaborate approval processes for low-risk experiments. Minimal oversight for high-stakes autonomous systems. Weeks of frantic preparation crammed in before each audit. Maximum effort, maximum disruption, minimum actual governance of AI behaviour.
Minimum lovable governance offers a different approach.
From Viable to Lovable
The concept borrows from a progression in product thinking: the evolution from Eric Ries’s Minimum Viable Product (the smallest thing you can ship to learn) to the Minimum Lovable Product (the smallest thing customers will actually embrace).
Applied to governance, this distinction matters enormously. Heavyweight governance frameworks win grudging compliance or get routed around entirely; governance that is lovable gets used voluntarily. When people find workarounds and operate in the shadows, the governance exists on paper but fails to govern in practice.
Minimum lovable governance means building the smallest system that achieves necessary guardrails and that people actually want to engage with. Just enough structure to demonstrate good faith whilst preserving the agility that makes AI valuable.
Four Characteristics
Embedded rather than separate. When governance exists as forms to fill and approvals to seek, it becomes friction. When governance is woven into tools and workflows, it becomes structure people move through naturally. The developer who receives automated risk assessment as they deploy a model experiences governance differently from one who must schedule a review board meeting and wait three weeks.
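As a concrete illustration - a minimal sketch, not any particular platform’s API, with every name and criterion invented for the example - embedded governance can be as simple as a risk check that runs inside the deployment script itself:

```python
# Sketch: governance as a step inside the deployment workflow,
# not a separate meeting. All names and criteria are illustrative.

def assess_risk(metadata: dict) -> str:
    """Score a model deployment against simple, auditable criteria."""
    score = sum([
        2 if metadata.get("customer_facing") else 0,
        2 if metadata.get("uses_personal_data") else 0,
        1 if metadata.get("acts_autonomously") else 0,
    ])
    return "high" if score >= 4 else "medium" if score >= 2 else "low"

def deploy(metadata: dict) -> None:
    tier = assess_risk(metadata)
    if tier == "high":
        # High-risk deployments still get human review - but the
        # developer learns that immediately, not three weeks later.
        raise PermissionError("High-risk deployment: review required")
    print(f"Risk tier '{tier}': deployment proceeds, decision logged")

deploy({"customer_facing": False, "uses_personal_data": True})
```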
Continuous rather than episodic. Instead of concentrating assurance activities into defined review periods, minimum lovable governance distributes oversight across time so that compliance is always current. When the regulator calls tomorrow, the answer comes in hours, not weeks.
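One way to make assurance continuous rather than episodic - again a hedged sketch, with invented control names - is to have every automated control check append a timestamped evidence record as it runs, so the audit trail is a by-product of normal operation:

```python
# Sketch: each control check writes evidence as it runs, so audit
# readiness is always current. Control names are invented.
import json
from datetime import datetime, timezone

EVIDENCE_LOG = "compliance_evidence.jsonl"

def record_check(control: str, passed: bool, detail: str) -> None:
    """Append a timestamped evidence record for a control check."""
    entry = {
        "control": control,
        "passed": passed,
        "detail": detail,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(EVIDENCE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Called from the monitoring jobs that already run daily, e.g.:
record_check("model-drift-threshold", True, "PSI 0.04 < 0.2 limit")
record_check("training-data-consent", True, "all sources consent-tagged")
```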
Proportionate to risk. A customer-facing credit decisioning system demands different governance from an internal chatbot summarising meeting notes. Proportionality isn’t about doing less governance - it’s about matching governance intensity to actual risk, ensuring high-stakes applications receive appropriate scrutiny whilst low-risk experiments proceed without unnecessary friction.
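Proportionality can be made explicit as configuration: a mapping from risk tier to required controls, so the chatbot and the credit model visibly follow different paths. The tiers and controls below are illustrative assumptions, not drawn from any framework:

```python
# Sketch: governance intensity as explicit, inspectable configuration.
# Tiers and control names are invented for illustration.

CONTROLS_BY_TIER = {
    "low":    ["usage_logging"],
    "medium": ["usage_logging", "bias_testing", "owner_signoff"],
    "high":   ["usage_logging", "bias_testing", "owner_signoff",
               "independent_review", "continuous_monitoring",
               "incident_runbook"],
}

def required_controls(tier: str) -> list[str]:
    return CONTROLS_BY_TIER[tier]

# An internal meeting-notes chatbot versus a credit decisioning system:
print(required_controls("low"))   # light-touch: just logging
print(required_controls("high"))  # full scrutiny, before and after launch
```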
Clarity at the point of decision. Policy documents sitting on SharePoint don’t govern behaviour. What governs behaviour is the guidance people receive when they’re actually making choices. Should I use this customer data for training? Can I deploy this model to production? Minimum lovable governance answers these questions where and when they arise.
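Point-of-decision guidance means the answer arrives where the question is asked. A sketch - with hypothetical function names, field names, and policy rules - of a check a developer could call, or a tool could run automatically, before using a dataset for training:

```python
# Sketch: policy answered at the moment of choice, not in a document.
# The rules and field names here are invented for illustration.

def may_use_for_training(dataset: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for using a dataset in model training."""
    if dataset.get("contains_personal_data") and not dataset.get("consent_recorded"):
        return False, "Personal data without recorded consent: not permitted"
    if dataset.get("retention_expired"):
        return False, "Dataset past its retention period: not permitted"
    return True, "Permitted: decision logged with dataset ID"

allowed, reason = may_use_for_training(
    {"contains_personal_data": True, "consent_recorded": True}
)
print(allowed, "-", reason)
```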
The Test
The test is straightforward. If people are routing around your governance, it isn’t governance - it’s documentation. If they’re using it grudgingly just to get the job done, it’s tolerable but fragile. If they’re embracing it because it enables them to innovate and move faster, you’ve achieved minimum lovable governance.
The shadow AI phenomenon reveals which approach organisations have actually chosen, regardless of what their policies claim. When the majority of employees use AI outside formal governance, the organisation has governance on paper but not in practice.
Why This Matters Now
Three developments make minimum lovable governance the operating principle for this moment.
Regulatory architecture increasingly demands proportionality. The EU AI Act’s risk-tiered structure explicitly requires governance that scales with risk rather than applying uniform controls. This validates proportionate governance as the expected standard, not a shortcut.
Technology enables embedding. AI governance can now be delivered through the same platforms that deliver AI capability - automated risk assessment, continuous monitoring, contextual policy guidance woven into workflows rather than bolted on as separate processes.
Shadow AI forces the issue. When most employees use AI outside formal governance, organisations face a choice: build governance people will actually use, or accept that governance exists on paper while practice diverges completely.
The Strategic Choice
The question isn’t whether to govern AI - regulators, stakeholders, and operational risks have made that decision. The question is whether governance will be something the organisation does or something the organisation is.
Traditional governance treats oversight as a separate function. Minimum lovable governance treats governance as organisational capability - embedded in how work happens, continuous in operation, proportionate to risk. Not governance as constraint but governance as infrastructure, providing the foundation for confident AI deployment.
Build governance people route around, or build governance people love to use. The outcomes will differ accordingly.
Related Articles
Foundational
- Minimum Lovable Governance: The AI Operating Principle Boards Should Use - The complete exploration of what minimum lovable governance means and how to achieve it
Application
- AI Centre of Excellence: The Eighteen Functions - How the AI CoE operationalises minimum lovable governance
- Agentic AI: Strategic Implications for Boards - Governance approaches for autonomous AI systems
- Navigating the AI Regulatory Maze - Proportionate governance in regulatory context
Context
- AI Strategy: Coherent Actions - Minimum lovable governance within systematic AI transformation
- The Shadow AI Challenge - Why traditional governance fails and what to do instead