The Model Context Protocol (MCP) enables AI agents to access external tools and data sources so that they can more effectively take action.
The Model Context Protocol is a standard way to make information available to large language models (LLMs). Somewhat similar to the way an application programming interface (API) works, MCP offers a documented, standardized way for a computer program to integrate services from an external source. It supports agentic AI: intelligent programs that can autonomously pursue goals and take action.
MCP essentially allows AI programs to go beyond their training data. It enables them to incorporate new sources of information into their decision-making and content generation, and helps them connect to external tools.
Imagine an assistant who needs to make reservations for his boss at a restaurant. The assistant will call the restaurant's phone number, ask what times they have available, and request a table. MCP is a way to provide a "phone number" to AI agents so that they can get the information they need in order to carry out tasks.
MCP was developed by AI company Anthropic and later open-sourced. Since becoming open source in late 2024, MCP has rapidly become an industry standard, enabling more widespread use of AI agents.
AI agents are AI programs built on top of LLMs. They use the information-processing capabilities of LLMs to obtain data, make decisions, and take actions on behalf of human users.
MCP is one way for AI agents to find the information they need and to take actions. It helps connect AI agents to the "outside world," so to speak — the world beyond the LLM's training data. (Other methods include API integrations and headless browsing.)
MCP is a protocol: an agreed-upon set of rules for communication between diverse, network-connected computing devices. MCP presumes a client-server architecture, in which one entity, the client (the AI agent or a subsidiary program), sends requests to servers, which respond.
MCP clients operate within MCP hosts. Clients maintain a one-to-one connection with MCP servers, but multiple clients can run from the same MCP host. Therefore MCP hosts can draw data from multiple MCP servers simultaneously. MCP servers, in turn, can use API integrations to obtain data from additional sources.
This means an AI agent can use MCP to connect to multiple servers at once, although each connection takes place independently of the others. Think of a team of reporters at a newspaper, all contacting sources individually but then pooling their information to produce a news item.
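To make this concrete, here is a minimal sketch using the official TypeScript SDK (@modelcontextprotocol/sdk); the two servers, their commands, and the host name are hypothetical placeholders. The host creates one client per server and combines what it learns from both:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// One client per server: the hypothetical "weather" and "calendar" servers
// below stand in for any two MCP servers the host wants to use.
async function connectTo(command: string, args: string[]) {
  const client = new Client({ name: "example-host", version: "1.0.0" });
  await client.connect(new StdioClientTransport({ command, args }));
  return client;
}

const weatherClient = await connectTo("node", ["weather-server.js"]);
const calendarClient = await connectTo("node", ["calendar-server.js"]);

// Each connection is independent; the host aggregates the results.
const weatherTools = (await weatherClient.listTools()).tools;
const calendarTools = (await calendarClient.listTools()).tools;
console.log("Tools available to the agent:", [
  ...weatherTools.map((t) => t.name),
  ...calendarTools.map((t) => t.name),
]);
```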
There are four types of messages used in MCP, all based on the JSON-RPC 2.0 format: requests, which ask the other party to perform an operation and expect a reply; results, which are successful responses to requests; errors, which indicate that a request failed; and notifications, which are one-way messages that do not expect a response.
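For illustration, the four shapes look roughly like this in the JSON-RPC 2.0 format MCP is built on (the tool name get_forecast is a made-up example):

```typescript
// 1. Request: the client asks the server to do something and expects a reply.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_forecast", arguments: { city: "Lisbon" } },
};

// 2. Result: the server's successful response, matched to the request by id.
const result = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Sunny, 24°C" }] },
};

// 3. Error: the server could not fulfill the request.
const error = {
  jsonrpc: "2.0",
  id: 1,
  error: { code: -32602, message: "Unknown tool: get_forecast" },
};

// 4. Notification: a one-way message with no id and no reply expected.
const notification = {
  jsonrpc: "2.0",
  method: "notifications/tools/list_changed",
};
```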
MCP connections can be either remote or local. Remote connections take place between AI agents and MCP servers over the Internet. Local connections take place within a single machine, where the MCP client and MCP server run as separate programs on the same device.
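In the official TypeScript SDK, this difference shows up as a choice of transport. The following sketch uses a placeholder command and URL:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });

// Local connection: the server is a separate process on the same machine,
// reached over standard input/output.
const localTransport = new StdioClientTransport({
  command: "node",
  args: ["local-mcp-server.js"],
});

// Remote connection: the server is reached over the Internet via HTTP(S).
const remoteTransport = new StreamableHTTPClientTransport(
  new URL("https://mcp.example.com/mcp"),
);

// Connect with whichever transport fits the deployment.
await client.connect(remoteTransport); // or: await client.connect(localTransport);
```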
There are three phases in MCP network communications: initialization, in which the client and server negotiate protocol versions and capabilities; message exchange, in which requests, results, errors, and notifications flow between the two; and termination, in which either party closes the connection.
To make MCP more secure, additional steps for authentication and authorization may take place prior to these three phases.
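As a sketch, the handshake that opens a connection looks roughly like the following; the client name is a placeholder and the protocol version shown is one of the published revisions:

```typescript
// Phase 1: initialization. The client proposes a protocol version and
// describes itself; the server replies with its own version and capabilities.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26",
    capabilities: {},
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

// The client confirms readiness with a one-way notification.
const initializedNotification = {
  jsonrpc: "2.0",
  method: "notifications/initialized",
};

// Phase 2: message exchange. Requests such as "tools/list" and "tools/call"
// flow back and forth, along with results, errors, and notifications.

// Phase 3: termination. Either side closes the connection (for a local
// connection, closing the stdio pipe; for a remote one, ending the HTTP session).
```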
An MCP server is a program hosted on a server or in the cloud that exposes capabilities for AI agents to use via MCP. MCP servers can provide AI agents with access to new data sets or other tools that they need. For instance, an MCP server might allow an AI agent to use an email service, so that the agent can send emails on behalf of the human user it is assisting.
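Here is a minimal sketch of such a server, written with the official TypeScript SDK; the send_email tool and the email-sending function are made-up stand-ins for a real email provider's API:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for a call to a real email service's API.
async function sendViaEmailProvider(to: string, subject: string, body: string) {
  console.log(`Would send "${subject}" to ${to}: ${body}`);
}

const server = new McpServer({ name: "email-server", version: "1.0.0" });

// Expose a single tool that an AI agent can discover and call over MCP.
server.tool(
  "send_email",
  { to: z.string().email(), subject: z.string(), body: z.string() },
  async ({ to, subject, body }) => {
    await sendViaEmailProvider(to, subject, body);
    return { content: [{ type: "text", text: `Email sent to ${to}` }] };
  },
);

// Serve the tool over a local (stdio) connection.
await server.connect(new StdioServerTransport());
```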
MCP does not have authentication, authorization, or encryption natively built in, so developers have to implement these capabilities themselves or use a service that assists with implementation.
MCP also does not require the use of HTTPS; many implementations run over plain HTTP. Unless developers proactively implement Transport Layer Security (TLS), MCP connections can lack encryption and authentication, leaving them vulnerable, like any unprotected network traffic, to impersonation and on-path attacks.
Because MCP offers similar functionality to an API (external parties requesting data and services), many of the major API security considerations also apply to MCP implementations. Organizations making MCP servers available must ensure that confidential data is not exposed, that resources are protected, that excessive requests are stopped by rate limiting, that AI agents do not have too many permissions, and that inputs are validated and sanitized.
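As a simplified illustration (not a production-ready design), the sketch below puts two of those protections, a bearer-token check and a naive fixed-window rate limit, in front of a placeholder MCP endpoint; all names and limits are arbitrary:

```typescript
const EXPECTED_TOKEN = "replace-with-a-real-secret";
const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_WINDOW = 100;
const requestCounts = new Map<string, { count: number; windowStart: number }>();

// Placeholder for the actual MCP server logic behind the checks.
async function handleMcpRequest(_request: Request): Promise<Response> {
  return new Response(JSON.stringify({ jsonrpc: "2.0", id: null, result: {} }), {
    headers: { "Content-Type": "application/json" },
  });
}

export async function guardedFetch(request: Request): Promise<Response> {
  // Authentication: require a bearer token before doing any work.
  const auth = request.headers.get("Authorization") ?? "";
  if (auth !== `Bearer ${EXPECTED_TOKEN}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Rate limiting: reject clients that exceed a simple fixed window.
  const clientKey = request.headers.get("CF-Connecting-IP") ?? "unknown";
  const now = Date.now();
  const entry = requestCounts.get(clientKey);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    requestCounts.set(clientKey, { count: 1, windowStart: now });
  } else if (++entry.count > MAX_REQUESTS_PER_WINDOW) {
    return new Response("Too Many Requests", { status: 429 });
  }

  return handleMcpRequest(request);
}
```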
Some MCP servers offer libraries to make OAuth implementation easier. Cloudflare provides an OAuth Provider Library that implements the provider side of the OAuth 2.1 protocol, allowing you to easily add authorization to your MCP server.
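A rough sketch of how such a Worker might be wired together, based on the library's documented configuration options (the handler objects are placeholders, and the exact option names should be checked against the library's README):

```typescript
import OAuthProvider from "@cloudflare/workers-oauth-provider";

// Placeholder handlers: in a real Worker, apiHandler would be the MCP server
// itself and defaultHandler would render the login and consent flow.
const McpApiHandler = {
  fetch: async (_request: Request) => new Response("MCP endpoint placeholder"),
};
const AuthUiHandler = {
  fetch: async (_request: Request) => new Response("Login and consent placeholder"),
};

export default new OAuthProvider({
  apiRoute: "/mcp",                  // requests here must carry a valid access token
  apiHandler: McpApiHandler,         // handles authenticated MCP traffic
  defaultHandler: AuthUiHandler,     // handles everything else, including login
  authorizeEndpoint: "/authorize",
  tokenEndpoint: "/token",
  clientRegistrationEndpoint: "/register",
});
```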
Developers can use this OAuth Provider Library in three ways:
Cloudflare makes several MCP servers available for use by developers building agentic AI. Cloudflare also enables developers to build and deploy their own MCP servers to support AI agents. Learn how to get started with MCP on Cloudflare.