# What is MCP? The Model Context Protocol, Explained for Developers

*Published April 17, 2026 · 8 min read · [agenticdev.blog/guides/what-is-mcp](https://agenticdev.blog/guides/what-is-mcp)*

## TL;DR

MCP (Model Context Protocol) is an open standard, published by Anthropic in November 2024, that lets AI models talk to external tools, files, and APIs through a single common interface. It replaces the per-vendor plugin architectures that preceded it. Every major AI client — Claude Desktop, Cursor, Zed, Continue — now speaks MCP, which means a server written once works everywhere. The protocol is small (JSON-RPC over stdio or HTTP+SSE), the SDKs are thin, and the ecosystem has grown from zero to hundreds of community servers in a year.

---

If you've been anywhere near AI developer tools in the last twelve months, you've run into three letters: **MCP**. It's on every release note, every Cursor changelog, every Hacker News thread about Claude Desktop. The phrase "just add an MCP server" has replaced "just install an extension" as the default power-user move. Here's what MCP actually is, why it took over, and how to start using it.

## The short version

**MCP — the Model Context Protocol — is an open standard that lets AI models talk to external tools, data, and APIs through a common interface.** Anthropic published the spec in November 2024. By mid-2025 it had been adopted by Cursor, Zed, Continue, and a long list of agent frameworks. Today it's the closest thing the ecosystem has to a universal plugin standard.

The USB-C analogy everyone reaches for is accurate: MCP is a single pluggable interface that replaces a drawer full of vendor-specific adapters. You write a server once; it works in every MCP-compliant host.

## Why this matters (the problem MCP solves)

Before MCP, every AI tool reinvented the same wheel. ChatGPT had Plugins; the OpenAI API had function calling. Each IDE extension had its own way to pipe a filesystem or a database into the model.
Every integration was bespoke, and nothing was portable.

The cost of that was real: if you built a great Notion integration for one chat app, you couldn't use it in any other. If you ran an internal tool at work, you had to write the same "talk to the model" shim three times — once for the Claude app, once for Cursor, once for whatever else your team used.

MCP solves that by moving the contract out of the client. A server advertises what it can do — `list_files`, `query_postgres`, `create_jira_ticket` — and any MCP-aware host can discover and call it. The model doesn't care who wrote the server; the server doesn't care which model is calling it.

## How MCP actually works

At the wire level, MCP is surprisingly small. It's **JSON-RPC 2.0 over stdio or HTTP+SSE**, with a handful of standard methods:

- `initialize` — the host and server exchange capabilities.
- `tools/list` — the server advertises its tools, with JSON Schema for each input.
- `tools/call` — the host invokes a tool by name with arguments.
- `resources/list` and `resources/read` — the server advertises and serves read-only context (files, database rows, API responses).
- `prompts/list` and `prompts/get` — the server publishes canned prompts that the host can surface in a menu.

That's almost the whole thing. The simplicity is the feature. You can implement an MCP server in under 100 lines of Python or TypeScript. Anthropic maintains SDKs in both.

## The three primitives: tools, resources, prompts

Every MCP server exposes some mix of three primitives, and knowing the difference tells you what a given server can do.

### Tools

Things the model can *do*. Side-effectful or query-based. A tool has a name, a description, a JSON Schema for its arguments, and a return type. `send_email`, `run_sql`, `git_commit` — all tools.

### Resources

Things the model can *read*. A filesystem, a database table, a Notion page. Resources are addressed by URI and usually return text or JSON.
They're the read-only half of the protocol.

### Prompts

Reusable prompt templates the server author wants to ship alongside the tools. A GitHub MCP server might publish a "summarize recent issues" prompt that the host surfaces in a slash-command menu. Lightweight and underused.

## How to add your first MCP server

The easiest place to start is **Claude Desktop**. Install it, then edit `claude_desktop_config.json` (macOS: `~/Library/Application Support/Claude/`; Windows: `%APPDATA%\Claude\`). A minimal config that adds the official filesystem server looks like this:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    }
  }
}
```

Restart Claude Desktop. Open a new conversation, and the model now has `read_file`, `list_directory`, `search_files`, and a few other filesystem tools it can call. No plugin install, no account link — just a process spawned over stdio.

In **Cursor**, the flow is similar but lives in the UI: Settings → MCP → "Add new MCP server." Paste the command and arguments, restart, and the tools show up in the agent's palette.

## Where MCP shines — and where it doesn't

### What it's good at

- **Local-first integrations.** A filesystem server, a git server, a Postgres server — all ideal. The process runs next to the model, latency is near zero, and you don't need to host anything.
- **Internal tools.** Every company has a dozen scripts that wrap internal APIs. Wrapping them as an MCP server once makes them usable from every AI client you ship to employees.
- **Specialized knowledge sources.** A server that searches your team's Notion or queries your logs becomes context the model can pull on demand, not a 50-KB blob you dump into every prompt.

### What it's not great at

- **Browser-based clients.** The stdio transport assumes a local process. Running MCP from a web app means HTTP+SSE, and that path is less mature.
- **Discovery UX.** There's no official registry.
Installing a server still means finding its GitHub README and copying a config snippet. Third-party registries like Smithery and MCP.so are filling the gap, but the experience is pre-app-store.
- **Authentication.** The protocol is young. How you ship API keys to a remote MCP server — and how the user authorizes it — is still being worked out in the spec.

## The ecosystem, one year in

MCP adoption accelerated fast. By early 2026 the major hosts — Claude Desktop, Cursor, Zed, Continue, Cline, and several open-source agents — all speak MCP. The server ecosystem is in the hundreds: official servers from GitHub, Sentry, Linear, Notion, and Stripe; community servers for every database, every cloud, every "what if I wired this up to my LLM" side project.

The politics are interesting. OpenAI initially sat out, pushing Functions and later the Responses API. In 2025 they quietly began supporting MCP in the Agents SDK. Google followed with A2A ("Agent2Agent"), a complementary protocol for agent-to-agent communication — different layer, same spirit. The industry now treats "can I point an MCP server at it" as table stakes.

## Should you build one?

If you have an internal tool that half your team wants the AI to use, yes. The SDKs are thin, the spec is small, and you get distribution across every MCP-aware host for free.

If you're a SaaS vendor, the calculus is harder. An MCP server gives developers a low-friction way to pipe your product into their AI workflow, but the support surface — auth, quotas, observability — is still being figured out publicly. Ship one, watch the Anthropic spec updates carefully, and be ready to iterate.
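Before you reach for an SDK, it's worth seeing how little wire protocol there actually is. The sketch below is a toy, spec-incomplete dispatcher in plain Python (not the official SDK): one hypothetical `add` tool, happy path only, covering the `initialize`, `tools/list`, and `tools/call` methods described earlier. The real protocol adds capability negotiation, error objects, and notifications on top of this.

```python
import json

# Toy tool table: one hypothetical tool with a JSON Schema for its arguments.
TOOLS = {
    "add": {
        "description": "Add two integers.",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
        "handler": lambda args: args["a"] + args["b"],
    }
}

def handle(request: dict) -> dict:
    """Route one JSON-RPC 2.0 request to a response (happy path only)."""
    method, params = request["method"], request.get("params", {})
    if method == "initialize":
        result = {"serverInfo": {"name": "toy-server", "version": "0.1"}}
    elif method == "tools/list":
        result = {"tools": [
            {"name": n, "description": t["description"], "inputSchema": t["inputSchema"]}
            for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        value = TOOLS[params["name"]]["handler"](params["arguments"])
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# A host sends one JSON object at a time over stdio; simulate a single call:
response = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                   "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
print(json.dumps(response))
```

The official Python and TypeScript SDKs hide this loop behind decorators and handle the transport for you, but the messages on the wire look essentially like this.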
## Further reading

- [modelcontextprotocol.io](https://modelcontextprotocol.io/) — official spec and SDKs
- [modelcontextprotocol/servers](https://github.com/modelcontextprotocol/servers) — reference server implementations
- [Agentic Dev's MCP coverage](https://agenticdev.blog/category/mcp-integrations) — every MCP-related story we've indexed

MCP is one of those protocols whose upside isn't obvious from the spec. Then you install two or three servers and realize your AI tools just got a nervous system. Start with filesystem and git. See what you reach for next.

---

*Part of the [Agentic Dev Guides](https://agenticdev.blog/guides) — evergreen explainers for developers who ship with AI. Subscribe to the daily newsletter at [agenticdev.blog](https://agenticdev.blog/).*