
Crossing the GenAI Divide: Solving The 95% Problem With The Complete AI Framework

Published in AI and Board | 12-minute read
(Image: business executives crossing a bridge from scattered platforms symbolising isolated pilot projects toward an interconnected city, representing the journey from fragmented efforts to systematic transformation. Generated by ChatGPT 5)

The MIT NANDA “State of AI in Business 2025” report, published in July 2025, examined 300+ AI implementations across nine major sectors to understand why enterprise AI adoption shows such stark variation in outcomes. The findings are striking: whilst over 80% of organisations have explored or piloted general-purpose tools like ChatGPT, only 60% have evaluated custom enterprise solutions, just 20% have reached pilot stage, and a mere 5% have achieved production implementation.

The study’s methodology — combining executive interviews, deployment analysis, and sector-by-sector assessment — provides a comprehensive mapping of what the researchers term the “GenAI Divide.” These findings align precisely with patterns I’ve directly observed: successful organisations don’t just implement better technology, they establish the coherent governance that enables systematic transformation whilst others cycle through disconnected pilots.

Most significantly for Boards, this research provides empirical validation that systematic governance frameworks deliver measurable transformation whilst opportunistic project approaches plateau in pilot mode. The specific challenges MIT identifies — change management coordination, investment allocation patterns, integration requirements, and capability development needs — correspond directly to the governance solutions I’ve developed through my Complete AI Framework, integrating the AI Stages of Adoption (AISA), Five Pillars of capability, and the Well-Advised strategic priorities.

The MIT Research: Success Patterns Revealed

Rather than simply documenting adoption rates, the MIT study identifies the specific characteristics that distinguish organisations achieving systematic transformation from those cycling through pilots. The successful 5% share common characteristics that directly validate the governance approaches I’ve outlined in previous articles.

What Successful Organisations Do Differently

The research reveals five critical success enablers that distinguish transformative adopters from organisations stuck in ongoing pilots:

Learning Systems: Successful organisations build AI systems that adapt, remember context, and improve over time. Unlike static tools that require constant prompting, these systems integrate feedback loops and develop institutional memory. This aligns with the Five Pillars approach to building institutional learning capabilities—systematic development across Governance & Accountability, Data & Analytics, Technology & Infrastructure, People & Culture, and Strategy & Innovation—that adapt and improve systematically. The manufacturing COO quoted in the study captured the contrast perfectly: whilst LinkedIn hype suggests everything has changed, only organisations with learning-capable systems see fundamental operational shifts.

Integration Excellence: Rather than deploying standalone tools, successful adopters focus on seamless workflow integration. They select solutions based on how well they embed into existing processes, not flashy demonstrations. This reflects the Technical Infrastructure pillar requirement for seamless workflow integration that I’ve outlined as essential for moving beyond isolated pilots. This explains why consumer tools like ChatGPT often outperform expensive enterprise solutions — they’re designed for flexibility and iteration rather than rigid functionality.

Governance Leadership: The research confirms what I’ve consistently advocated in my work on AI Centre of Excellence (AI CoE) and Board governance priorities: successful AI adoption requires Board-level oversight. Organisations advancing beyond pilots have established clear accountability frameworks and coordinated approaches across business functions. They’ve moved beyond departmental silos to systematic governance.

Strategic Investment: Whilst most organisations allocate approximately 70% of GenAI budgets to sales and marketing functions, successful adopters balance investment across value-creating areas. This aligns precisely with the portfolio approach I’ve outlined in my business case series, where organisations achieve better outcomes through deliberate distribution across quick wins, capability building, and transformative initiatives. They recognise that back-office automation often yields higher ROI than visible customer-facing applications, and they invest accordingly.

Workforce Transformation: The research reveals that successful organisations achieve selective workforce evolution rather than broad displacement, with measurable savings from reduced BPO spending and external agency use, particularly in back-office operations. This aligns with the Responsible Transformation pillar of Well-Advised—the framework that ensures balanced value creation across Innovation, Customer Value, Operational Excellence, Responsible Transformation, and Revenue dimensions—demonstrating how systematic AI adoption creates value through process improvement rather than workforce reduction.

The Multi-Speed Reality

The study confirms AI’s inherently parallel adoption pattern. Technology and Media sectors show clear structural disruption, whilst seven other industries continue developing through proven transformation pathways. This multi-speed reality validates the governance approaches I’ve previously outlined — different functions and even industries naturally progress at different rates, requiring sophisticated coordination.

Professional Services shows efficiency gains whilst enhancing client delivery. Healthcare implements documentation improvements alongside evolving clinical models. Financial Services automates backend processes whilst strengthening customer relationships. Each sector and function operates on its own complexity curve, creating the coordination opportunities that systematic frameworks address.

The Complete AI Framework: The Bridge to Success

The MIT research provides empirical validation for the integrated approach I’ve developed through the AI Stages of Adoption (AISA), Five Pillars, and Well-Advised. Each success enabler the study identifies maps directly to components of the Complete AI Framework, demonstrating why systematic governance succeeds where opportunistic pilots do not.

AISA: Managing the Investment Journey

AISA maps organisations’ AI maturity across five stages — from initial Experimenting through to enterprise-wide Scaling — using investment rather than time as the progression measure. The research highlights the critical pilot-to-production transition that challenges most organisations. AISA addresses this directly by recognising that AI adoption requires commitment across financial resources, people development, data preparation, process redesign, and organisational attention.

The multi-speed reality MIT documents — where marketing teams rapidly adopt content generation whilst finance approaches automation thoughtfully — reflects the natural variation in investment capacity across business functions. AISA recognises these different progression rates whilst maintaining strategic coherence, enabling the coordinated advancement that moves organisations beyond pilot experimentation.

This investment focus explains why different initiatives create varying value profiles and why traditional project evaluation methods often miss strategic value. This perspective enables portfolio decisions that balance quick wins with strategic transformation rather than applying uniform criteria across fundamentally different initiative types.

Five Pillars: Building Learning Capabilities

MIT’s emphasis on learning systems directly validates the Five Pillars approach to capability development across Governance & Accountability, Data & Analytics, Technology & Infrastructure, People & Culture, and Strategy & Innovation. The research shows that successful organisations don’t just deploy tools — they build institutional capabilities that improve over time.

The study’s finding that enterprise tools often struggle due to workflow integration challenges reflects precisely the capability gaps that Five Pillars assessment identifies. Rather than hoping individual projects will somehow create organisational capability, the framework ensures systematic development of the institutional learning systems that MIT shows enable advancement beyond experimentation.

Most importantly, Five Pillars recognises that different AISA stages require different capability maturity levels. Early-stage organisations need foundational capabilities in data quality and basic governance. Advanced adopters require sophisticated MLOps, ethical AI frameworks, and ecosystem integration. This staged approach prevents the capability mismatches that limit project progression.

Well-Advised: Balanced Value Creation

The research reveals significant investment patterns, with organisations directing approximately 70% of generative AI budgets toward sales and marketing whilst back-office automation often delivers superior ROI. Well-Advised directly addresses this through its five-priorities approach to value measurement: Innovation, Customer Value, Operational Excellence, Responsible Transformation, and Revenue.

This balanced perspective prevents the narrow focus on single outcomes that the MIT study shows characterises organisations remaining in pilot mode. Rather than chasing visible metrics like email response rates or social media engagement, the framework ensures comprehensive value creation across strategic dimensions. Initiatives that deliver across multiple Well-Advised priorities typically create more sustainable value than those focused on individual departmental benefits.

The emphasis on balancing financial and non-financial metrics becomes particularly important in multi-speed adoption environments, where different functions create value through different mechanisms and timescales. This comprehensive approach helps Boards evaluate AI initiatives based on strategic contribution rather than immediate operational metrics.

AI Centre of Excellence: Coordinated Governance

MIT’s research confirms that successful organisations have established coordinated governance approaches with clear executive oversight. The AI CoE structure provides exactly this systematic coordination, transforming fragmented pilot approaches into coherent transformation.

The study’s finding that external partnerships achieve twice the success rate of internal builds validates the AI CoE’s strategic procurement approach, where organisations focus on vendor orchestration and capability building rather than attempting to develop everything internally. This “buy rather than build” strategy enables organisations to leverage external expertise whilst building institutional capabilities simultaneously. Rather than relegating AI to IT departments or innovation labs, the AI CoE provides Board-level authority to coordinate across functions whilst maintaining strategic alignment.

Most critically, the AI CoE addresses the “shadow AI economy” the research documents, where 90% of employees use personal AI tools whilst only 40% of companies have official programmes. Through structured AI amnesty programmes and governed experimentation frameworks, the AI CoE channels creative energy within appropriate boundaries rather than suppressing innovation through restrictive policies.

Mapping to the Six Board Concerns

The patterns MIT identifies map precisely to the six concerns I’ve outlined that Boards must address in AI governance: Strategic Alignment, Ethical and Legal Responsibility, Financial and Operational Impact, Risk Management, Stakeholder Confidence, and Safeguarding Innovation. This alignment validates both the systematic nature of the challenges and the solutions required for successful AI transformation.

Strategic Alignment emerges clearly in the research’s documentation of investment patterns toward visible functions rather than strategic value creation. Organisations succeed when they align AI initiatives with comprehensive strategic objectives rather than pursuing technology for departmental efficiency alone.

Ethical and Legal Responsibility surfaces in the study’s emphasis on governance frameworks and the risks created by shadow AI usage. Successful organisations establish clear accountability structures and ensure AI decisions remain explainable and compliant.

Financial and Operational Impact is illustrated by the investment patterns the research documents. The study validates that systematic approaches to value measurement and capability development are essential for realising AI’s financial potential across multiple business functions.

Risk Management appears throughout the study’s findings on integration challenges and model performance considerations. Successful organisations build governance mechanisms that manage AI’s operational risks whilst enabling continued innovation.

Stakeholder Confidence emerges in the research’s documentation of user experience patterns and trust considerations with enterprise AI tools. The most successful implementations maintain transparent practices and clear communication about AI’s role in augmenting human capability.

Safeguarding Innovation is highlighted by the pilot-to-production transition that prevents most organisations from scaling successful experiments. Systematic frameworks enable innovation within appropriate governance boundaries rather than constraining creativity through bureaucratic oversight.

Practical Board Actions: Advancing Beyond Pilots

The research provides clear guidance for Boards ready to move from pilot experimentation to systematic transformation. The actions successful organisations take fall into three time horizons, each building capabilities that enable the next phase.

Immediate Actions (30 Days)

Begin with a comprehensive generative AI audit using AISA — which maps organisations across five stages from Experimenting to Scaling based on investment progression — to understand the current position across business functions. This audit should identify which AI systems qualify under different criteria, map capability gaps across the Five Pillars, and assess resource requirements for systematic adoption.

Implement an AI Amnesty programme to surface shadow AI usage and transform it into governed experimentation. The MIT research shows that employees already use AI tools extensively — the opportunity lies in channelling this energy systematically rather than allowing fragmented adoption that creates governance challenges.

Map current initiatives to Well-Advised pillars to identify investment patterns and value opportunities. Most organisations discover they’re over-investing in visible functions whilst neglecting higher-ROI opportunities in operations, compliance, or analytical capabilities.
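For technically minded readers, the sketch below shows one way an AI CoE analyst might tabulate an initiative register against the Well-Advised priorities to make investment skew visible. It is a minimal illustration only: the initiative names, budgets, and field names are hypothetical assumptions, not data from the MIT study or a prescribed part of the framework.

```python
# Illustrative sketch: a minimal initiative register mapped to Well-Advised
# priorities, used to surface budget skew. All names and figures are
# hypothetical examples, not data from the MIT report.
from collections import defaultdict

# Each initiative records its primary Well-Advised priority, its AISA stage,
# and its annual budget in thousands.
initiatives = [
    {"name": "Marketing copy assistant",     "priority": "Revenue",                   "aisa_stage": "Experimenting", "budget_k": 400},
    {"name": "Customer service chatbot",     "priority": "Customer Value",            "aisa_stage": "Adopting",      "budget_k": 300},
    {"name": "Invoice-matching automation",  "priority": "Operational Excellence",    "aisa_stage": "Experimenting", "budget_k": 120},
    {"name": "Model governance tooling",     "priority": "Responsible Transformation","aisa_stage": "Experimenting", "budget_k": 80},
    {"name": "R&D literature mining",        "priority": "Innovation",                "aisa_stage": "Experimenting", "budget_k": 100},
]

def spend_by(items, key):
    """Sum budget per category, e.g. per Well-Advised priority or AISA stage."""
    totals = defaultdict(int)
    for item in items:
        totals[item[key]] += item["budget_k"]
    return dict(totals)

total = sum(i["budget_k"] for i in initiatives)
for priority, spend in sorted(spend_by(initiatives, "priority").items(), key=lambda kv: -kv[1]):
    print(f"{priority:26s} {spend:5d}k  ({spend / total:.0%} of AI budget)")
```

Even a simple summary like this makes the concentration of spend in customer-facing functions immediately visible, giving the Board a factual starting point for rebalancing the portfolio toward the higher-ROI back-office opportunities the research highlights.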

Next Quarter Actions

Establish the AI CoE with clear Board-level reporting relationships. The research confirms that successful organisations have coordinated governance with executive oversight. The AI CoE provides the institutional mechanism for managing multi-speed adoption whilst maintaining strategic alignment.

Implement integrated evaluation processes using the Five Pillars framework to assess initiatives based on capability development rather than isolated business cases. This systematic approach enables the coordinated advancement that moves organisations beyond pilot experimentation.

Launch a balanced portfolio that addresses the investment patterns MIT documents. Rather than allowing departmental preferences to drive AI spending, create deliberate balance across value-creating functions, with particular attention to back-office automation opportunities that often deliver superior ROI.

Next Year Actions

Scale successful initiatives using the systematic governance structures established through the Complete AI Framework. The research shows that organisations advancing beyond pilots build on early successes through coordinated expansion rather than launching additional isolated experiments.

Develop governance frameworks as competitive advantages. As the study demonstrates, systematic approaches to AI adoption become increasingly valuable assets, creating sustainable differentiation for organisations that implement them effectively.

Build ecosystem partnerships that extend capabilities beyond organisational boundaries. The research confirms that successful organisations leverage external relationships systematically, using vendor partnerships to accelerate capability development whilst maintaining strategic control.

The Competitive Advantage: Why Systematic Approaches Work

The MIT research illuminates why systematic governance creates competitive advantages that opportunistic pilots cannot match. Successful organisations don’t just implement better technology — they build institutional capabilities that compound over time.

The study’s finding that organisations advancing beyond pilots “buy rather than build, empower line managers rather than central labs, and select tools that integrate deeply while adapting over time” reflects sophisticated strategic choices about capability development. These organisations recognise that sustainable AI transformation requires coordinated approaches that balance innovation velocity with governance requirements. Remember: “minimum lovable governance”.

The research also reveals why timing matters for competitive positioning. As systematic adopters establish market positions and build institutional capabilities, experimental approaches face increasing disadvantages. The governance premium investors and customers place on transparent AI becomes more pronounced as regulatory frameworks mature and stakeholder expectations evolve.

Most importantly, the study demonstrates that advancing beyond the pilot stage creates self-reinforcing advantages. Organisations with systematic governance attract better partnerships, retain talent more effectively, and build trust with stakeholders in ways that accelerate subsequent AI initiatives. The 5% succeeding today are establishing dominant positions for tomorrow’s AI economy.

Future Evolution: Preparing for Agentic AI

The research also speculates about emerging agentic AI systems that could embed persistent memory and iterative learning capabilities, potentially addressing the learning gap that defines the GenAI Divide. Such systems would remember context across interactions, learn from feedback, and autonomously orchestrate complex workflows, representing a significant evolution from current AI capabilities.

Whilst these agentic systems are only beginning to appear in the enterprise, their emergence suggests that governance frameworks must prepare for increasingly autonomous AI capabilities. The Complete AI Framework’s emphasis on systematic capability building and coordinated governance provides the foundation needed to govern these more sophisticated AI systems effectively should they materialise as the research suggests.

Conclusion: From Opportunity to Action

The MIT research provides definitive evidence that systematic AI governance works whilst opportunistic pilots plateau. The patterns keeping 95% of organisations in experimentation mode — change management approaches, governance development needs, integration requirements, and investment allocation decisions — are entirely addressable through proven frameworks.

The Complete AI Framework provides the systematic approach the research validates. By integrating AISA stages, Five Pillars capabilities, and Well-Advised value creation, organisations can move decisively from pilot experimentation to systematic transformation.

The opportunity facing Boards is clear — whilst most organisations remain in experimental mode, systematic governance provides the pathway to measurable AI transformation. The successful 5% demonstrate what’s possible when coordinated frameworks replace opportunistic projects.

The GenAI Divide represents one of the most significant opportunities in modern business. Through the AI CoE, organisations can assess their current landscape, harness existing AI usage, and implement the coordinated approach that transforms industry-wide patterns into competitive advantage.

The pathway is proven, the frameworks are established, and the opportunity is immediate.

Let's Continue the Conversation

Thank you for exploring my insights on bridging the GenAI Divide with the Complete AI Framework. If you're interested in discussing how your organisation can address the challenges of systematic AI adoption or share experiences on governance and transformation, I’d be glad to connect.




About the Author

Mario Thomas is a transformational business leader with nearly three decades of experience driving operational excellence and revenue growth across global enterprises. As Head of Global Training and Press Spokesperson at Amazon Web Services (AWS), he leads worldwide enablement delivery and operations for one of technology's largest sales forces during a pivotal era of AI innovation. A Chartered Director and Fellow of the Institute of Directors, and an alumnus of the London School of Economics, Mario partners with Boards and C-suite leaders to deliver measurable business outcomes through strategic transformation. His frameworks and methodologies have generated over two billion dollars in enterprise value through the effective adoption of AI, data, and cloud technologies.