
What is MCP? The Model Context Protocol, explained.

Published April 17, 2026 · 8 min read · Part of the Agentic Dev Guides series.

TL;DR

MCP (Model Context Protocol) is an open standard — published by Anthropic in November 2024 — that lets AI models talk to external tools, files, and APIs through a single common interface. It replaces the per-vendor plugin architectures that preceded it. Every major AI client (Claude Desktop, Cursor, Zed, Continue) speaks MCP, so a server written once works everywhere. The protocol is small (JSON-RPC over stdio or HTTP+SSE), SDKs are thin, and the ecosystem grew from zero to hundreds of community servers in a year.

If you've been anywhere near AI developer tools in the last twelve months, you've run into three letters: MCP. It's on every release note, every Cursor changelog, every Hacker News thread about Claude Desktop. The phrase "just add an MCP server" has replaced "just install an extension" as the default power-user move.

Here's what MCP actually is, why it took over, and how to start using it.

The short version

MCP — the Model Context Protocol — is an open standard that lets AI models talk to external tools, data, and APIs through a common interface. Anthropic published the spec in November 2024. By mid-2025 it had been adopted by Cursor, Zed, Continue, and a long list of agent frameworks. Today it's the closest thing the ecosystem has to a universal plugin standard.

The USB-C analogy everyone reaches for is accurate: MCP is a single pluggable interface that replaces a drawer full of vendor-specific adapters. You write a server once; it works in every MCP-compliant host.

Why this matters (the problem MCP solves)

Before MCP, every AI tool reinvented the same wheel. OpenAI shipped ChatGPT Plugins, then function calling. Each IDE extension had its own way to pipe a filesystem or a database into the model. Every integration was bespoke, and nothing was portable.

The cost of that was real: if you built a great Notion integration for one chat app, you couldn't use it in any other. If you ran an internal tool at work, you had to write the same "talk to the model" shim three times — once for the Claude app, once for Cursor, once for whatever else your team used.

MCP solves that by moving the contract out of the client. A server advertises what it can do — list_files, query_postgres, create_jira_ticket — and any MCP-aware host can discover and call it. The model doesn't care who wrote the server; the server doesn't care which model is calling it.

How MCP actually works

At the wire level, MCP is surprisingly small. It's JSON-RPC 2.0 over stdio or HTTP+SSE, with a handful of standard methods:

initialize: the opening handshake, where host and server negotiate versions and capabilities
tools/list and tools/call: discover and invoke tools
resources/list and resources/read: enumerate and fetch readable data
prompts/list and prompts/get: discover and fill in prompt templates

That's almost the whole thing. The simplicity is the feature. You can implement an MCP server in under 100 lines of Python or TypeScript, and Anthropic maintains official SDKs in both.

The three primitives: tools, resources, prompts

Every MCP server exposes some mix of three primitives, and knowing the difference tells you what a given server can do.

Tools

Things the model can do. Side-effectful or query-based. A tool has a name, a description, a JSON Schema for its arguments, and a return type. send_email, run_sql, git_commit — all tools.
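Concretely, a tool descriptor is just a name, a description, and a JSON Schema for the arguments. Here's a sketch for a hypothetical send_email tool (field names follow the MCP tool shape; the schema fields and the validation helper are illustrative, since real hosts run full JSON Schema validation):

```python
# Hypothetical descriptor for a send_email tool.
send_email = {
    "name": "send_email",
    "description": "Send an email via the company SMTP relay.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to":      {"type": "string", "format": "email"},
            "subject": {"type": "string"},
            "body":    {"type": "string"},
        },
        "required": ["to", "subject"],
    },
}

def missing_args(tool: dict, args: dict) -> list[str]:
    """Toy check: which required arguments are absent from a call?"""
    return [k for k in tool["inputSchema"].get("required", []) if k not in args]
```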

Resources

Things the model can read. A filesystem, a database table, a Notion page. Resources are addressed by URI and usually return text or JSON. They're the read-only half of the protocol.
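A sketch of the shapes involved, assuming a hypothetical file-backed server (the URIs and contents here are invented; the uri/mimeType/text keys follow the MCP resource shape):

```python
# What a resources/list result advertises: URI-addressed, read-only data.
resources = [
    {"uri": "file:///projects/notes/todo.md",
     "name": "todo.md",
     "mimeType": "text/markdown"},
]

def read_resource(uri: str) -> dict:
    """Toy resources/read result; a real server fetches the bytes behind the URI."""
    store = {"file:///projects/notes/todo.md": "- [ ] ship the MCP guide\n"}
    return {"contents": [{"uri": uri,
                          "mimeType": "text/markdown",
                          "text": store[uri]}]}
```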

Prompts

Reusable prompt templates the server author wants to ship alongside the tools. A GitHub MCP server might publish a "summarize recent issues" prompt the host surfaces in a slash-command menu. Lightweight and underused.
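A sketch of that GitHub-style example, with invented names: the server advertises the template's arguments, and a prompts/get call returns ready-made chat messages for the host to feed to the model:

```python
# Hypothetical prompt template a server might advertise via prompts/list.
summarize_issues = {
    "name": "summarize_recent_issues",
    "description": "Summarize the most recent open issues in a repo.",
    "arguments": [{"name": "repo", "description": "owner/name", "required": True}],
}

def render(prompt: dict, args: dict) -> list[dict]:
    """Toy prompts/get result: a list of chat messages built from the arguments."""
    return [{"role": "user",
             "content": {"type": "text",
                         "text": f"Summarize the most recent open issues in {args['repo']}."}}]
```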

How to add your first MCP server

The easiest place to start is Claude Desktop. Install it, then edit claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/; Windows: %APPDATA%\Claude\).

A minimal config that adds the official filesystem server looks like this:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    }
  }
}

Restart Claude Desktop. Open a new conversation, and the model now has read_file, list_directory, search_files, and a few other filesystem tools it can call. No plugin install, no account link — just a process spawning over stdio.
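Under the hood, each of those reads is one tools/call round trip over the spawned process's stdio. A sketch of the exchange as Python dicts (the argument name and returned text are illustrative, not a verbatim dump from @modelcontextprotocol/server-filesystem):

```python
import json

# Host -> server: invoke the filesystem server's read_file tool.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "read_file",
               "arguments": {"path": "/Users/you/projects/README.md"}},
}

# Server -> host: the tool result, as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {"content": [{"type": "text", "text": "# My project\n..."}]},
}

# Each message travels as a single newline-delimited JSON line.
wire = json.dumps(request)
```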

In Cursor, the flow is similar but lives in the UI: Settings → MCP → "Add new MCP server." Paste the command and arguments, restart, and the tools show up in the agent's palette.

Where MCP shines — and where it doesn't

What it's good at

Local, developer-facing integrations: filesystems, git, databases, internal CLIs. You write the server once and every MCP-aware host on your machine can use it. The spec is small enough to implement in an afternoon, and the stdio transport means no network plumbing at all.

What it's not great at

Hosted, multi-tenant deployments. Auth, quotas, and observability conventions are still being worked out; long-running or streaming jobs fit awkwardly into a request/response tool call; and a host that loads many servers can crowd the model's context with tool definitions.

The ecosystem, one year in

MCP adoption accelerated fast. By early 2026 the major hosts — Claude Desktop, Cursor, Zed, Continue, Cline, and several open-source agents — all speak MCP. The server ecosystem is in the hundreds: official servers from GitHub, Sentry, Linear, Notion, and Stripe; community servers for every database, every cloud, every "what if I wired this up to my LLM" side project.

The politics are interesting. OpenAI initially sat out, pushing Functions and later the Responses API. In 2025 they quietly began supporting MCP in the Agents SDK. Google followed with A2A ("Agent2Agent"), a complementary protocol for agent-to-agent communication — different layer, same spirit. The industry now treats "can I point an MCP server at it" as table stakes.

Should you build one?

If you have an internal tool that half your team wants the AI to use, yes. The SDKs are thin, the spec is small, and you get distribution across every MCP-aware host for free.

If you're a SaaS vendor, the calculus is harder. An MCP server gives developers a low-friction way to pipe your product into their AI workflow, but the support surface — auth, quotas, observability — is still being figured out publicly. Ship one, watch the Anthropic spec updates carefully, and be ready to iterate.

Further reading

The spec and official documentation live at modelcontextprotocol.io. The SDKs and reference servers (including the filesystem server used above) are on GitHub under the modelcontextprotocol org, and Anthropic's November 2024 announcement post, "Introducing the Model Context Protocol," covers the design rationale.

MCP is one of those protocols whose upside isn't obvious from the spec. Then you install two or three servers and realize your AI tools just got a nervous system. Start with filesystem and git. See what you reach for next.
