Model Context Protocol (MCP) Explained: The 2026 Standard for AI Tools

The open protocol Anthropic introduced in late 2024 has quietly become the default AI integration standard. By April 2026, 78% of enterprise AI teams have MCP-backed agents in production.

By Marcus Chia 2026-02-25 10 min read

The most consequential AI infrastructure development of 2025 was not a model release. It was a protocol. Anthropic introduced the Model Context Protocol (MCP) in November 2024 as an open standard for connecting AI agents to external tools and data sources. By April 2026, 78% of enterprise AI teams report having at least one MCP-backed agent in production, and the public MCP server registry has grown from 1,200 servers in Q1 2025 to over 9,400 in April 2026.

For teams building AI agents, MCP is no longer optional. This article is the practitioner's view of what it is, why it won, and how to use it.

1. The problem MCP solves

Before MCP, every AI agent integration was a custom build. To let an agent query your database, you wrote bespoke glue code. To let it post to Slack, more glue code. To let it read from Google Drive, more glue. Each integration was specific to a single agent framework, written in a specific language, with its own authentication, error handling, and security boundaries. Over time, organisations accumulated dozens of these custom integrations — fragile, hard to maintain, and impossible to share between teams.

MCP standardises the interface. An agent that speaks MCP can connect to any MCP-compliant server. An MCP server (a small program that exposes a set of tools or resources) can be reused across any MCP-compliant agent. The economics shifted overnight from N×M integrations to N+M.
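The arithmetic behind that shift is easy to check: with N agent frameworks and M tools, bespoke glue means one integration per (framework, tool) pair, while a shared protocol needs only one adapter per side. A two-function sketch (the 5 and 40 are illustrative numbers, not survey data):

```python
# Integration count for N agent frameworks and M tools.
def bespoke(n: int, m: int) -> int:
    # One custom integration per (framework, tool) pair.
    return n * m

def with_mcp(n: int, m: int) -> int:
    # One MCP client per framework plus one MCP server per tool.
    return n + m

# e.g. 5 frameworks and 40 tools:
print(bespoke(5, 40))   # 200 custom integrations to build and maintain
print(with_mcp(5, 40))  # 45 MCP components, each reusable
```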

2. The architecture in plain English

MCP architecture: Host ↔ Client ↔ Server

MCP defines three roles:

  • Host — the AI application the user interacts with (Claude Desktop, Cursor, ChatGPT, your custom agent).
  • Client — a component inside the host that maintains the connection to a specific MCP server.
  • Server — the program that exposes tools, resources, or prompts to the AI agent (a Slack server, a Postgres server, a GitHub server).

An MCP server can expose three things to the host:

  • Tools — functions the agent can call (e.g. send_slack_message, query_database, create_github_issue).
  • Resources — data the agent can read (files, database tables, document collections).
  • Prompts — reusable prompt templates the host can offer the user.

The protocol uses JSON-RPC over stdio (for local servers) or HTTP/SSE (for remote servers). For most users, this is invisible — the host handles the protocol details and presents the available tools naturally to the model.
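Under the hood, a tool invocation is an ordinary JSON-RPC 2.0 exchange using the MCP tools/call method. A minimal sketch of the message shapes (the tool name and arguments here are illustrative, not a real server's schema):

```python
import json

# What a host's client sends to a server to invoke a tool (MCP "tools/call").
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_slack_message",  # a tool the server advertised via tools/list
        "arguments": {"channel": "#ops", "text": "Deploy finished"},
    },
}

# A typical success response from the server.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Message sent"}]},
}

# Over the stdio transport, each message travels as one line of JSON.
wire = json.dumps(request)
print(wire)
```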

3. Why MCP won

Three reasons.

It was good enough early. The November 2024 specification was usable from day one. Anthropic shipped reference servers for Postgres, Google Drive, Slack, GitHub, Git, and Puppeteer alongside the protocol. Developers could try it the same week it was announced.

It was open. Apache 2.0 licensed. No vendor lock-in. Anthropic doubled down by donating MCP to the Agentic AI Foundation under the Linux Foundation in 2025, with founding support from Block, OpenAI, Google, Microsoft, AWS, Cloudflare, and Bloomberg. The protocol is now governed by a neutral foundation, not a single vendor.

The major hosts adopted it fast. ChatGPT (via the Apps SDK and Connectors), Microsoft Copilot, Cursor, VS Code, Gemini API and Vertex AI Agent Builder all shipped MCP support during 2025. Microsoft published MCP servers for GitHub, Azure, Microsoft Teams, and the Microsoft 365 surface. By Q1 2026, MCP was the universal common denominator across AI hosts.

4. The 2026 ecosystem

What an MCP server can expose

  • Tools — callable functions such as send_slack_message(), query_database(), create_github_issue(). The agent invokes them with arguments; the server executes and returns results.
  • Resources — readable data: files, database tables, document collections. The host can include them in the model's context window automatically.
  • Prompts — reusable templates the host offers users ("Summarise this PR", "Draft a changelog"). Encoded once on the server, available across every host.

Public statistics tell the story:

  • 9,400+ public MCP servers as of April 2026, growing at +18% month-over-month through Q1 2026.
  • 67% of CTOs surveyed expect MCP to be their default agent-integration standard within 12 months.
  • 78% of enterprise AI teams have at least one MCP-backed agent in production.

The server ecosystem covers everything from developer tools (GitHub, GitLab, Postgres, Redis) through productivity (Slack, Notion, Google Drive, Microsoft 365) to enterprise systems (Salesforce, ServiceNow, SAP) and the long tail of internal-corporate servers that organisations build for their own systems.

5. How to use MCP in 2026

If you are using AI agents

Use a host that supports MCP — Claude Desktop, Cursor, VS Code, or any of the major frameworks (LangGraph, OpenAI Agents SDK, n8n's MCP node). Configure the MCP servers your agent needs. Most popular tools have a public server already; the registry at modelcontextprotocol.io is the starting point.
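In Claude Desktop, for example, server definitions live in a JSON config file. The snippet below generates one entry for the reference filesystem server (the npx package name follows the official reference servers; the path is a placeholder you would adjust for your machine):

```python
import json

# Sketch of a Claude Desktop MCP configuration (claude_desktop_config.json).
# The "filesystem" entry runs the reference filesystem server via npx;
# swap in the servers your agent actually needs.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "/Users/me/projects",  # directory the server may expose
            ],
        }
    }
}

print(json.dumps(config, indent=2))
```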

If you are building agents

Pick an SDK in your language of choice — TypeScript, Python, Go, and Rust SDKs are all maintained. Wrap your internal tools as MCP servers rather than as direct API calls inside the agent. The investment pays back the first time you reuse the server with a different host or a different agent framework.
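To make "wrap your tools as a server" concrete, here is a toy, hand-rolled stdio dispatcher showing the request/response shapes for tools/list and tools/call. It is illustrative only; a real server should use the official SDKs, which handle initialisation, capability negotiation, and input schemas. The lookup_order tool is a hypothetical internal function:

```python
import json
import sys

# Toy MCP-style server: one JSON-RPC request per line on stdin,
# one response per line on stdout. Not production code.
TOOLS = {
    "lookup_order": lambda args: f"Order {args['order_id']}: shipped",
}

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to a registered tool."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        text = tool(request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

if __name__ == "__main__":
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```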

If you are securing AI deployments

MCP servers are the new attack surface. They have credentials, they call internal systems, and they are exposed to the agent's reasoning. Apply the controls we cover in our agentic security article: allowlist tool surfaces, audit log every call, human-in-the-loop on consequential actions, hard spend caps, kill switches.
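Several of those controls can be enforced on the client side, before any request reaches a server. A minimal sketch of an allowlist, audit log, and hard spend cap wrapped around tool calls (tool names, the cap, and forward_to_server are all illustrative stand-ins):

```python
# Guardrails applied before a tool call is forwarded to an MCP server.
ALLOWED_TOOLS = {"query_database", "send_slack_message"}
SPEND_CAP_USD = 50.0

audit_log = []
spent = 0.0

def guarded_call(tool: str, args: dict, cost_usd: float = 0.0):
    global spent
    # Audit every attempt, including ones that get blocked.
    audit_log.append({"tool": tool, "args": args, "cost_usd": cost_usd})
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")
    if spent + cost_usd > SPEND_CAP_USD:
        raise RuntimeError("hard spend cap reached: refusing call")
    spent += cost_usd
    return forward_to_server(tool, args)

def forward_to_server(tool, args):
    # Stand-in for the real MCP client invocation.
    return {"ok": True}

print(guarded_call("query_database", {"sql": "select 1"}, cost_usd=0.01))
```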

6. What is changing in 2026

Three trends to watch.

Code execution with MCP. Anthropic published research in 2025 showing that letting agents write and execute code that calls multiple MCP servers — rather than calling each server step by step — is dramatically more efficient. The pattern is becoming standard for complex multi-tool tasks.
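The difference is easy to see in miniature: instead of one model round-trip per tool call, the agent emits one script that chains the calls locally and returns a single result. In the sketch below, call_tool is a hypothetical stand-in for an MCP client invocation, with canned results so the script is self-contained:

```python
# Stand-in for invoking a tool on an MCP server (hypothetical helper).
def call_tool(name: str, **args):
    fake_results = {
        "list_open_issues": ["#101", "#102"],
        "fetch_issue": {"id": args.get("id"), "title": "Fix login bug"},
        "post_summary": "posted",
    }
    return fake_results[name]

# The agent writes and runs this once, instead of making three separate
# tool-call round-trips through the model.
issues = call_tool("list_open_issues")
details = [call_tool("fetch_issue", id=i) for i in issues]
summary = f"{len(details)} open issues triaged"
print(call_tool("post_summary", text=summary))
```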

Authentication standardisation. The early MCP authentication story was inconsistent — each server handled credentials differently. The OAuth 2.1 integration that landed in the late-2025 specification revisions is becoming the default for remote MCP servers. Expect this to settle further through 2026.

Enterprise governance tooling. As MCP adoption hits the 78% mark in enterprise, the governance layer is catching up. Expect tools that audit MCP server usage, enforce data classification at the server level, and integrate MCP with enterprise IAM.

For Malaysian teams building MCP-based agents with appropriate governance, our AI Agentic Automation programme and AI Agentic Security programme cover the full stack — building, securing, and governing MCP-integrated systems. HRDC SBL-KHAS claimable for eligible employers.

About the author

Marcus Chia

12+ yrs Product Design · Vibe Coding Specialist · ASEAN-scale Products

Marcus has 12+ years in product design and front-end engineering, having shipped consumer and SaaS products used by millions across ASEAN. He specialises in vibe-coding workflows that turn Figma concepts into deployable apps using Claude Code, Antigravity, and Cursor — and teaches non-developers to ship polished, user-centric interfaces in days rather than sprints.

Frequently Asked Questions

What is the Model Context Protocol, and why does it matter?

Model Context Protocol is an open standard for connecting AI agents to external tools and data sources, introduced by Anthropic in November 2024 and now governed by the Linux Foundation's Agentic AI Foundation. It matters because it eliminates the N×M integration problem — any MCP-compliant agent can connect to any MCP-compliant server. By April 2026, 78 percent of enterprise AI teams have at least one MCP-backed agent in production.

Is MCP open, or controlled by Anthropic?

Open. Apache 2.0 licensed. Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation in 2025, with founding support from Block, OpenAI, Google, Microsoft, AWS, Cloudflare, and Bloomberg. The protocol is now governed by a neutral foundation, not a single vendor — which is one of the main reasons it achieved cross-industry adoption so quickly.

Which platforms support MCP?

By 2026: Claude (Anthropic), ChatGPT (via Apps SDK and Connectors), Microsoft Copilot, Cursor, VS Code, Gemini API and Vertex AI Agent Builder, plus most major agent frameworks (LangGraph, OpenAI Agents SDK, n8n MCP node). The protocol is now the universal common denominator across AI hosts.

What are the security risks of MCP servers?

MCP servers are the new attack surface. They have credentials to call internal systems, and they are exposed to the agent's reasoning — which makes prompt injection a direct path to abusing the server's permissions. The mitigations are the same as for any privileged integration: allowlist tool surfaces, audit log every call, human-in-the-loop on consequential actions, hard spend caps, and a tested kill switch. We cover this in detail in our agentic security article.

Should we build our own MCP servers?

Yes, for internal systems that no public server covers. The investment is small — a basic MCP server is a few hundred lines of code in TypeScript or Python — and the payback is immediate the first time you reuse the same server with multiple agents or hosts. For external systems that already have public MCP servers (GitHub, Slack, Google Drive), use the public ones.

Want to apply this in your organisation?

AITraining2U runs HRDC-claimable corporate AI training for Malaysian organisations — from leadership awareness to hands-on builder workshops. Talk to us about a programme tailored to your team.