
MCP Explained: The Agent Infrastructure Standard Boards Need to Understand

Llantwit Major | Published in Data | 11 minute read
A sleek modern MCP hub on a dark walnut executive desk, with cables of different vintages connecting to surrounding legacy hardware including a CRT monitor, blue LED glowing on the hub. (Image generated by ChatGPT 5.2)

According to Deloitte’s 2026 State of AI in the Enterprise report, 74% of organisations plan to deploy autonomous agents within two years, and yet only 21% have mature governance in place for those agents. The gap between intent and readiness is striking — but there is a more fundamental challenge beneath it.

An AI agent is, at its most fundamental, an orchestration system — one that plans, retrieves information, coordinates tools and models, and executes tasks autonomously within a loop it drives itself. But its outputs are only as good as the information it can access. An agent operating solely on publicly available information — the internet, industry research, general knowledge — has no visibility into your customer relationships, your operational data, your proprietary processes, or the institutional knowledge your organisation has built over decades. The orchestration may be sophisticated. The raw material is generic. And generic inputs produce generic outputs, regardless of how capable the underlying system is.

This is not a flaw in the technology. It is a connectivity problem. Model Context Protocol (MCP) is the infrastructure standard that solves it. In this article I explain what MCP is, why it matters strategically, and the governance questions Boards should be asking before their technology teams answer them by default.

What MCP actually is

The most useful way to understand Model Context Protocol is by analogy with a problem that has already been solved.

Before USB became the universal interface for peripheral devices, every connection was bespoke. Printers required one cable and one driver. Cameras required another. Keyboards, external storage, audio interfaces — each demanded its own proprietary integration. USB replaced this complexity with a single standardised interface: one connection type that any device could use to communicate with any computer, without custom engineering for every combination.

MCP does the same thing for AI agents and enterprise systems. Before MCP, connecting an agent to a CRM required one bespoke integration, built and maintained by your technology team. Connecting it to a document store required a separate integration. Connecting it to an ERP required a third. Each connection was an engineering project in its own right, and the cost of maintaining those integrations compounded as systems were updated and agents evolved. MCP replaces this with a standardised interface — a defined way for agents to discover what systems are available and interact with them, without the organisation rebuilding those connections every time a component changes.

In practice, an MCP server sits between the agent and the underlying system, handling the translation. The agent never communicates directly with Salesforce, the ERP, or the document management system — it communicates with the MCP server, which acts as a defined intermediary. This architecture is strategically valuable: it creates a clear, auditable control point between AI behaviour and core systems.

Through MCP, agents can discover what capabilities are available — what data exists, what actions are possible — and then interact accordingly: reading information, creating records, updating data, or triggering workflows, depending on what the MCP server is configured to permit. That configuration decision belongs to the organisation, not to the protocol. An organisation can begin with a read-only deployment, giving agents visibility without write access — and this is a sensible entry position, significantly constraining the risk profile compared to full read-write access. But read-only is not risk-free. An agent that can see data can synthesise it into outputs — summaries, analyses, briefings — and those outputs may surface information to users who would not have had direct access to the underlying data. The agent changes nothing, but what it sees, it can encode into what it says. Governing read-only MCP access means governing not just what the agent retrieves, but what it produces and for whom. That distinction matters from the first deployment, not just when write access is introduced.
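The shape of that configuration decision can be sketched in a few lines. This is an illustrative sketch only — the class and method names (`McpServer`, `CrmBackend`, `list_capabilities`) are hypothetical, not the real MCP specification or any vendor's SDK — but it shows the two ideas in play: the agent discovers what it may do rather than assuming, and the read-only posture is enforced at the intermediary, not left to the agent's good behaviour.

```python
# Hypothetical sketch of a read-only MCP-style intermediary.
# All names here are illustrative; a real deployment would use an
# MCP SDK and the actual protocol, not a hand-rolled class.

class CrmBackend:
    """Stand-in for an underlying system such as a CRM."""
    def __init__(self):
        self._accounts = {"ACME": {"owner": "j.smith", "tier": "gold"}}

    def read_account(self, name):
        return self._accounts.get(name)

    def update_account(self, name, fields):
        self._accounts[name].update(fields)


class McpServer:
    """Defined intermediary: the agent never touches the backend directly."""
    def __init__(self, backend, read_only=True):
        self._backend = backend
        self._read_only = read_only

    def list_capabilities(self):
        # Discovery: the agent asks what is permitted rather than assuming.
        caps = ["read_account"]
        if not self._read_only:
            caps.append("update_account")
        return caps

    def call(self, capability, **kwargs):
        # The control point: anything outside the configured capability
        # list is refused here, regardless of what the backend supports.
        if capability not in self.list_capabilities():
            raise PermissionError(f"{capability} not permitted by server config")
        return getattr(self._backend, capability)(**kwargs)


server = McpServer(CrmBackend(), read_only=True)
print(server.list_capabilities())                        # ['read_account']
print(server.call("read_account", name="ACME")["tier"])  # gold
```

The point of the sketch is where the decision lives: flipping `read_only` is an organisational configuration choice made at the intermediary, which is exactly why that intermediary is an auditable control point.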

The concept of abstracted access to underlying systems is not new. Enterprise technology has attempted versions of this problem since the 1990s. What makes MCP viable where previous attempts faltered is scope discipline: MCP solves a deliberately narrower problem, built on existing web infrastructure, with a specification compact enough to implement without specialist expertise. That combination has produced adoption velocity that earlier standards never achieved, and it is what has caused the enterprise vendor community to converge on it as quickly as it has.

MCP is already the de facto standard

MCP was open-sourced in late 2024. By early 2026, most major enterprise software vendors — including Salesforce, SAP, Google, and Microsoft — have either launched or publicly committed to MCP servers for their platforms. For context, enterprise technology standards typically take years of committee deliberation to establish and longer still to achieve meaningful vendor adoption. MCP has done this in months. That convergence speed matters: when the primary providers of enterprise software are all building to the same interface this quickly, the standards question is effectively settled. The decision organisations now face is not whether to engage with MCP — it is simply when to start.

The timing of that decision has consequences that compound. Google Cloud’s 2025 ROI of AI research — drawn from 3,466 senior leaders across 24 countries — finds that 74% of executives with deployed agents report achieving ROI within the first year. Among those with agents most deeply embedded, a cohort representing 13% of executives surveyed, ROI is consistently higher across every measured dimension — customer experience, marketing effectiveness, security operations, and software development. The mechanism is not complicated: agents that learn from an organisation’s specific data and processes become more valuable over time as institutional knowledge accrues. That accumulated context — the pattern recognition, the embedded workflows, the outputs refined through repeated use — is not replicable by a competitor arriving later with a generic agent. The organisations connecting their agents to their proprietary data now are building an advantage that widens with every month that passes.

The systems agents cannot see

Not every system in an organisation can be connected to agents with equal ease, and understanding that reality is precisely what separates strategic decision-making from wishful thinking.

The scale of that challenge is significant. According to the Gartner Hype Cycle for AI 2025, 57% of organisations say their data is not AI-ready — and this is not primarily a technology shortfall. It is a connectivity and structural one. Organisations assessing their MCP readiness will find their systems fall into clearly different categories. Modern SaaS platforms — the CRMs, productivity suites, and service management tools from leading enterprise vendors — increasingly have vendor-built and vendor-maintained MCP servers. The deployment effort for these is relatively contained; much of the integration work has already been done. For more recent internal systems with well-structured APIs, building an MCP server is a real engineering investment, but a tractable one — the kind of project a competent technology team can scope and deliver. The challenge lies in the third category: the legacy systems that have run core operations for twenty or thirty years, the platforms built on proprietary data models that predate modern web infrastructure. For these, there may be no realistic route to MCP connectivity without a broader transformation programme first.

This matters strategically because those legacy systems frequently contain the most operationally critical data. An agent that cannot reach that data does not simply have incomplete information — it may be advising on procurement, customer strategy, or operational performance based on a fundamentally partial picture of the business. The gap between what the agent can see and what the organisation actually knows is the intelligence gap, and Boards should want to understand its dimensions before approving further agent investment.

For vendor-built MCP servers, two further questions deserve explicit Board-level attention. The first is openness: vendors decide which data to expose through their MCP servers, at what granularity, and under what terms. Those decisions are not neutral — a vendor may expose what aligns with their commercial interests rather than what serves the organisation’s. The second is security: the MCP server represents a new interface to core systems, and its security model needs to be deliberate. Authentication, permissions, and audit trails are governance questions as much as technical ones, and they belong in the procurement conversation, not just the implementation one.

From data access to data lifecycle

Most governance conversations about AI data access ask: what can the agent see? This is the right question, but not the only one. The more consequential question — and the one most organisations currently cannot answer — is what will the agent do with the data once it has accessed it?

When an agent retrieves information and synthesises it into an output — a summary, a market analysis, a strategic briefing for the leadership team — that output is a derived work. It encodes the intelligence extracted from source data without carrying the source data’s access controls. A briefing prepared by an agent with broad data access may contain information that should not be visible to every recipient of that briefing. The output is as sensitive as the source. Nothing automatically marks it as such. This is the inference channel problem.

This becomes more serious when agents have persistent memory — the capability that makes agents genuinely useful over time. If an agent stores its analysis and accumulated insight in a shared memory layer, and that same agent serves multiple users with different permission levels, the retrieval mechanisms have no way of knowing what they are surfacing or where it originated. A user without access to certain data can, in this model, receive intelligence derived from that data — without either party intending it. The raw data never moved. The intelligence did.
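One way to make the leak mechanism concrete is to track provenance: every synthesised output carries labels naming the source data it was derived from, and the memory layer refuses to surface an output to a user who is not cleared for all of its sources. The sketch below is an assumption-laden illustration, not an existing product's API — no mainstream agent memory layer does this by default, which is precisely the gap described above.

```python
# Illustrative sketch of the inference-channel problem and one mitigation:
# derived outputs inherit provenance labels from their sources, and
# retrieval filters on the requesting user's clearances. All names
# (Output, AgentMemory, the label scheme) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Output:
    text: str
    derived_from: set = field(default_factory=set)  # labels of source data

class AgentMemory:
    def __init__(self):
        self._items = []

    def store(self, output):
        self._items.append(output)

    def retrieve_for(self, user_clearances):
        # Without this subset check, intelligence derived from restricted
        # data is surfaced to users who could never read the raw data.
        return [o for o in self._items if o.derived_from <= user_clearances]

memory = AgentMemory()
memory.store(Output("Q3 revenue briefing", derived_from={"finance", "crm"}))
memory.store(Output("Public market summary", derived_from={"public"}))

analyst = {"public", "crm"}  # no finance clearance
print([o.text for o in memory.retrieve_for(analyst)])  # ['Public market summary']
```

Note what the filter encodes: the raw finance data never moves, but the briefing derived from it is withheld from the analyst all the same. Governing the output, not just the access, is the point.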

For most enterprise agent deployments today, the honest answer to the question “what will the agent do with the data once it has accessed it?” is: we do not fully know. Access controls govern the front door. What the agent synthesises, stores, and surfaces to others is largely ungoverned. This does not argue against deploying agents. It argues that the governance question needs to extend from data access to data lifecycle — and that organisations should be building that capability now, not after the first incident.

The financial consequences of not taking this seriously are already visible. IBM’s 2025 Cost of a Data Breach Report, drawn from 600 organisations and conducted by the Ponemon Institute, found that 97% of organisations that suffered an AI-related security incident lacked proper AI access controls. Shadow AI breaches — where agents operate outside sanctioned governance frameworks — cost organisations an average of $670,000 more than standard incidents, with 65% resulting in the compromise of personal data and 40% in intellectual property exposure. And 63% of breached organisations had no AI governance policy in place at all. The inference channel is no longer a theoretical governance concern — its financial consequences are already being reported.

Where to start

The practical first move for any Board seeking to understand its MCP position is clear. Ask the technology team to conduct a rapid visibility audit: identify the ten systems that drive 80% of the organisation’s critical decisions, and establish which of those are MCP-connected, which are MCP-buildable with reasonable engineering investment, and which are currently invisible to agents with no near-term route to connectivity. That single exercise surfaces the intelligence gap in concrete terms, creates the basis for a prioritisation conversation, and gives the governance questions that follow something specific to act on — rather than leaving them as abstractions that never quite connect to the organisation’s actual situation.
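The audit above reduces to a simple three-way classification, and even a spreadsheet-grade version of it surfaces the intelligence gap. The sketch below is illustrative only — the system names and their categories are invented for the example, not a real inventory or an assessment methodology.

```python
# Sketch of the "rapid visibility audit": classify the systems behind
# the organisation's critical decisions by their route to MCP
# connectivity. System names and statuses are illustrative only.

SYSTEMS = {
    "CRM (SaaS)":             "connected",  # vendor-built MCP server exists
    "Document store (SaaS)":  "connected",
    "Billing API (internal)": "buildable",  # modern API; server is buildable
    "Mainframe ledger":       "invisible",  # no near-term route to MCP
}

def intelligence_gap(systems):
    summary = {"connected": [], "buildable": [], "invisible": []}
    for name, status in systems.items():
        summary[status].append(name)
    return summary

gap = intelligence_gap(SYSTEMS)
print(f"{len(gap['invisible'])} of {len(SYSTEMS)} critical systems "
      f"are invisible to agents: {gap['invisible']}")
```

The output of that exercise — which operationally critical systems land in the `invisible` column — is the concrete basis for the prioritisation conversation the section describes.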

The questions Boards should be asking

There are four questions that cut through to the architecture, the permission model, and the full lifecycle of data that agents touch. None of them are IT questions. All of them are governance questions that happen to have a technology dimension.

The first concerns strategic visibility. Which of the organisation’s most critical business decisions are currently being made by agents that cannot see its core operational systems? If agents are advising on procurement, customer strategy, or operational performance without access to the systems that hold the relevant data, the quality of that advice is structurally limited — regardless of how capable the underlying model is.

The second concerns the permission model. Are the organisation’s agents authenticating with shared service accounts, or are they acting on behalf of specific users with that user’s own permissions? The answer determines whether existing access controls are being honoured or bypassed by default. This is a question worth putting directly to the technology team, and the answer is rarely as reassuring as one might hope.
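The difference between the two permission models can be shown in a few lines. In this hedged sketch — the identities, records, and `fetch` function are all hypothetical — a delegated (on-behalf-of) call inherits the requesting user's own entitlements, while a shared service account carries broad access that every agent request silently inherits.

```python
# Illustrative contrast between the two authentication models: a shared
# service account with broad entitlements versus delegated access that
# honours the individual user's permissions. All names are invented.

USER_PERMISSIONS = {
    "service-account": {"hr", "finance", "sales"},  # broad, shared identity
    "a.jones":         {"sales"},                   # one user's own access
}

RECORDS = {
    "pipeline_report": "sales",
    "salary_bands":    "hr",
}

def fetch(record, acting_as):
    """The intermediary checks whichever identity the agent presents."""
    required = RECORDS[record]
    if required not in USER_PERMISSIONS[acting_as]:
        raise PermissionError(f"{acting_as} lacks {required} access")
    return f"{record} contents"

# Delegated access is bounded by the user's own entitlements:
print(fetch("pipeline_report", acting_as="a.jones"))
# fetch("salary_bands", acting_as="a.jones") would raise PermissionError,
# whereas the shared service account bypasses that boundary by default:
print(fetch("salary_bands", acting_as="service-account"))
```

The governance question is simply which identity appears in `acting_as` when the agent calls: if it is always the service account, existing access controls are being bypassed by design.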

The third concerns the vendor relationship. For vendor-built MCP servers, what data has the vendor chosen to expose, and who made that decision? Are those choices subject to ongoing audit, and does the organisation have visibility into the security model of that interface? These are procurement and relationship questions as much as technical ones, and they belong in the governance conversation rather than in the IT department alone.

The fourth concerns data lifecycle. What is the full journey of data that the organisation’s agents access — from retrieval, through synthesis, to output and storage? Who can see the outputs? What does the agent retain between sessions, and who does that retained intelligence serve? The answer to this question is the measure of whether the organisation’s AI governance is genuinely mature or merely nominal.

These are not questions that require deep technical expertise to ask. They do require the Board to understand that MCP represents an infrastructure decision — one that is already being made, often by default, in technology teams across the organisation. The Board that asks these questions now is better positioned than the Board that encounters the answers after the fact.

MCP is not a technology decision dressed up as a governance question. It is a governance question that happens to have a technology dimension — and that distinction matters for how Boards engage with it. The organisations that treat MCP as an IT implementation project will find the decision has already been made for them, incrementally, by the teams connecting agents to systems without a coherent framework for what those agents can do with what they find. The organisations that treat it as an infrastructure and governance question will make that decision deliberately, with visibility into the intelligence gap, the permission model, and the full lifecycle of data their agents touch.

The standard is settled. The tools are available. The governance frameworks are still being written — and that window, in which an organisation can shape its own approach rather than retrofit controls onto adoption already underway, does not stay open indefinitely. Getting started is the right move. Getting started with clarity about what the agents can see, what they can do, and what happens to the intelligence they generate is the better one.

Let's Continue the Conversation

Thank you for reading about Model Context Protocol and the connectivity gap that is keeping so many AI agents from delivering real business value. I'd welcome hearing about your Board's experience with this — whether you're discovering that your agents are working from a more partial picture of the business than you realised, wrestling with how to prioritise MCP connectivity across a mixed estate of modern and legacy systems, or finding that the governance conversation in your organisation has focused on data access without yet reaching the harder question of what your agents do with that data once they have it.