The Problem MCP Solves
Every AI assistant, whether Claude, Copilot, Cursor, or anything else, runs into the same wall eventually: the model knows a lot, but it can't do anything outside its context window. It can't read your database, query your internal API, or check what time it is unless someone explicitly builds that bridge.
Before MCP, every team building an AI-powered tool had to wire up their own integrations from scratch. Want Claude to search your Notion workspace? Write a custom integration. Want it to query your Postgres database? Write another one. Each integration was one-off, fragile, and non-reusable.
Model Context Protocol (MCP) is Anthropic's answer: an open standard that defines a single, universal interface between AI hosts and the outside world. Build an MCP server once, and any MCP-compatible AI client (Claude Desktop, Claude Code, Cursor, your own app) can use it immediately.
Think of it like HTTP for AI tools. HTTP didn't invent the web, but it gave every browser and server a shared language. MCP does the same for AI and the services it needs to interact with.
Architecture in Three Parts
MCP has three roles:
Host – The application the user interacts with: Claude Desktop, Claude Code, Cursor, or your own custom app. The host contains an MCP client that manages connections.
Client – Lives inside the host. Maintains a 1:1 connection with an MCP server, handles the protocol negotiation, and routes requests from the model to the server and responses back.
Server – An external process (local or remote) that exposes capabilities to the model. This is the part you build. A server can be as simple as a single Python file.
┌───────────────────────────────────┐
│               Host                │
│  ┌──────────┐    ┌─────────────┐  │
│  │   LLM    │◄──►│ MCP Client  │  │
│  └──────────┘    └──────┬──────┘  │
└─────────────────────────┼─────────┘
                          │  MCP Protocol
              ┌───────────▼──────────┐
              │      MCP Server      │
              │  (your code / tool)  │
              └──────────────────────┘
What Servers Can Expose
An MCP server can expose three types of capabilities:
Tools
Functions the model can call – the most common and useful type. The model decides when to call a tool based on the conversation, receives the result, and incorporates it into its response.
@mcp.tool()
def query_database(sql: str) -> str:
    """Run a read-only SQL query and return results as JSON"""
    ...
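FastMCP derives the tool's name from the function, its description from the docstring, and a JSON Schema for its arguments from the type hints. Advertised to the client, the tool above would look roughly like this (shape simplified for illustration):

```json
{
  "name": "query_database",
  "description": "Run a read-only SQL query and return results as JSON",
  "inputSchema": {
    "type": "object",
    "properties": { "sql": { "type": "string" } },
    "required": ["sql"]
  }
}
```

This is why good docstrings and precise type hints matter: they are the only documentation the model sees when deciding whether and how to call your tool.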
Resources
Structured data the model can read – files, database contents, API responses. Resources are identified by URIs and are read on demand.
@mcp.resource("logs://app/recent")
def get_recent_logs() -> str:
    """Returns the last 100 lines of the application log"""
    ...
Prompts
Reusable prompt templates that the host can surface to users. Useful for standardising common workflows.
@mcp.prompt()
def code_review(code: str) -> str:
    return f"Review this code for bugs, security issues, and style:\n\n{code}"
Transport: How Clients and Servers Talk
MCP supports two transport mechanisms:
stdio – The server runs as a subprocess of the host. Communication happens over stdin/stdout. This is the standard approach for local tools: zero network setup, simple, fast.
HTTP + SSE – The server runs as an HTTP service. The client sends requests over HTTP and receives streaming responses via Server-Sent Events. Used for remote servers or when you need a persistent network service.
For local development and personal tools, stdio is almost always the right choice.
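Whichever transport you pick, what actually travels over it is JSON-RPC 2.0. A tool invocation, for example, is carried as a request shaped roughly like this (payload simplified for illustration; query_database is a hypothetical tool name):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query_database",
    "arguments": { "sql": "SELECT count(*) FROM users" }
  }
}
```

The SDK handles all of this framing for you; you never construct these messages by hand.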
Building Your First MCP Server in Python
Install the SDK:
pip install mcp
Here is a complete, working MCP server that exposes a few useful tools: checking the current time, reading a file, and doing a basic web search via DuckDuckGo's API:
# server.py
from datetime import datetime
from pathlib import Path

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dev-tools")


@mcp.tool()
def current_datetime() -> str:
    """Returns the current date and time."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


@mcp.tool()
def read_file(path: str) -> str:
    """
    Read a file from the filesystem and return its contents.

    Args:
        path: Absolute or relative path to the file.
    """
    try:
        return Path(path).read_text(encoding="utf-8")
    except FileNotFoundError:
        return f"Error: file not found: {path}"
    except Exception as e:
        return f"Error reading file: {e}"


@mcp.tool()
def web_search(query: str, max_results: int = 5) -> str:
    """
    Search the web using DuckDuckGo and return a summary of results.

    Args:
        query: The search query.
        max_results: Number of results to return (default 5).
    """
    url = "https://api.duckduckgo.com/"
    params = {"q": query, "format": "json", "no_redirect": 1}
    response = httpx.get(url, params=params, timeout=10)
    data = response.json()

    results = []
    for topic in data.get("RelatedTopics", [])[:max_results]:
        if "Text" in topic and "FirstURL" in topic:
            results.append(f"- {topic['Text']}\n  {topic['FirstURL']}")

    if not results:
        return f"No results found for: {query}"
    return "\n".join(results)


@mcp.resource("env://system")
def system_info() -> str:
    """Returns basic system information."""
    import platform
    return (
        f"OS: {platform.system()} {platform.release()}\n"
        f"Python: {platform.python_version()}\n"
        f"Machine: {platform.machine()}"
    )


if __name__ == "__main__":
    mcp.run()
Run it directly to verify it starts without errors:
python server.py
Connecting to Claude Desktop
Claude Desktop reads MCP server configuration from a JSON file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Add your server:
{
  "mcpServers": {
    "dev-tools": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
Restart Claude Desktop. A small plug icon appears in the chat interface confirming the server is connected. You can now ask Claude to use any of your tools directly in conversation:
"What time is it?" "Read the file at /home/user/notes.txt and summarise it." "Search the web for the latest news on Rust 2025."
Claude decides autonomously when to invoke each tool. You don't need to ask explicitly; it will reach for the right tool when the context calls for it.
Connecting to Claude Code
In Claude Code, Anthropic's CLI, MCP servers are configured per-project or globally. Add a server to the project configuration:
claude mcp add dev-tools python /absolute/path/to/server.py
Or edit .claude/settings.json directly:
{
  "mcpServers": {
    "dev-tools": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}
A More Realistic Example: Database Tools
Here is a more production-shaped example, an MCP server that exposes read-only access to a SQLite database:
# db_server.py
import json
import sqlite3

from mcp.server.fastmcp import FastMCP

DB_PATH = "app.db"

mcp = FastMCP("database")


@mcp.tool()
def list_tables() -> str:
    """List all tables in the database."""
    with sqlite3.connect(DB_PATH) as conn:
        tables = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return json.dumps([t[0] for t in tables])


@mcp.tool()
def describe_table(table: str) -> str:
    """
    Return the schema for a table.

    Args:
        table: Table name.
    """
    # Tool arguments are untrusted input: validate the name before
    # interpolating it into the PRAGMA statement.
    if not table.isidentifier():
        return f"Error: invalid table name: {table}"
    with sqlite3.connect(DB_PATH) as conn:
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
    return json.dumps([
        {"name": c[1], "type": c[2], "nullable": not c[3], "pk": bool(c[5])}
        for c in cols
    ], indent=2)


@mcp.tool()
def query(sql: str) -> str:
    """
    Run a read-only SQL SELECT query.

    Args:
        sql: A SELECT statement. INSERT/UPDATE/DELETE are rejected.
    """
    sql_stripped = sql.strip().upper()
    if not sql_stripped.startswith("SELECT"):
        return "Error: only SELECT queries are permitted."
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        rows = conn.execute(sql).fetchall()
    return json.dumps([dict(r) for r in rows], indent=2, default=str)


if __name__ == "__main__":
    mcp.run()
With this connected to Claude, you can have a natural-language conversation about your database:
"What tables do we have?" "Show me the last 10 orders with a value over Β£500." "How many users signed up each month this year?"
Claude translates your intent into SQL, calls the query tool, and presents the results; you never have to write the query yourself.
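The startswith("SELECT") check in the query tool above is deliberately coarse. If you want a slightly stronger guard, you can also reject multiple statements and obvious write keywords. Here is an illustrative stdlib-only sketch; it is a heuristic, not a substitute for opening the database read-only (e.g. sqlite3.connect("file:app.db?mode=ro", uri=True)):

```python
import re

# Keywords that indicate a write or schema change (illustrative, not exhaustive).
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|ATTACH|PRAGMA|REPLACE)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Accept only a single SELECT statement with no write keywords."""
    stripped = sql.strip().rstrip(";").strip()
    if ";" in stripped:  # more than one statement (also rejects ';' in literals)
        return False
    if not stripped.upper().startswith("SELECT"):
        return False
    return FORBIDDEN.search(stripped) is None
```

Note that word boundaries keep column names like last_update from tripping the filter, while a smuggled "SELECT 1; DROP TABLE users" is still rejected.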
Security Considerations
MCP servers run with the same permissions as the process that starts them. A few rules of thumb:
- Only expose what is needed. Don't create a general run_shell_command tool unless you have a very strong reason.
- Validate inputs. Treat tool arguments like any other external input; they come from an LLM that can be prompted by users.
- Enforce read-only where appropriate. The database example above rejects anything other than SELECT.
- Be careful with file access. If you build a read_file tool, consider restricting it to a specific directory rather than accepting arbitrary paths.
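For the file-access point, one common pattern is to resolve every requested path against a fixed root and refuse anything that escapes it. A minimal sketch, assuming ALLOWED_ROOT is a placeholder for whatever directory you choose to permit:

```python
from pathlib import Path

# Placeholder: the one directory this server is allowed to read from.
ALLOWED_ROOT = Path("/srv/mcp-sandbox")

def safe_read(relative_path: str, root: Path = ALLOWED_ROOT) -> str:
    """Read a file, refusing any path that escapes the sandbox (e.g. via '..')."""
    base = root.resolve()
    target = (base / relative_path).resolve()
    if target != base and base not in target.parents:
        return "Error: path escapes the allowed directory."
    try:
        return target.read_text(encoding="utf-8")
    except FileNotFoundError:
        return f"Error: file not found: {relative_path}"
    except OSError as e:
        return f"Error reading file: {e}"
```

Resolving before comparing is the important part: it collapses ".." segments and symlinks, so "../../etc/passwd" is caught even though it was passed as a relative path.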
The Bigger Picture
MCP is still young, but the adoption curve is steep. Within months of launch, hundreds of servers appeared in the wild: for GitHub, Slack, Postgres, Kubernetes, web browsers, vector databases, and more. Because the protocol is open, any tool you build works across all compatible hosts: today Claude Desktop and Claude Code, soon many others.
The pattern it enables is powerful: instead of fine-tuning a model to know about your specific systems, you give the model the ability to look things up at runtime. The model stays general; the tools make it specific to your context. That separation is cleaner, more maintainable, and easier to audit than trying to bake domain knowledge into weights.
Resources
- MCP official documentation
- MCP Python SDK on GitHub
- MCP server registry – reference implementations for GitHub, Slack, Postgres, filesystem, and more
- Claude Desktop MCP guide