Thought Leadership
March 2026 · Vincent Verdet

MCP Servers and Clients
Explained

Why the Model Context Protocol Changes How AI Connects to the World

9 min read

Introduction

If you have been working with AI tools recently, you may have noticed a new term showing up everywhere: MCP. It stands for Model Context Protocol, and it is quickly becoming the standard way for AI models to interact with external tools and data.

When I first came across MCP, my reaction was simple: "Why do we need another protocol? We already have APIs." That is a fair question. APIs have been connecting software systems for decades and they work well. So what problem does MCP actually solve?

The short answer: MCP solves the problem of connecting AI to everything else without writing custom code for every single connection. Think of it like USB for AI. Before USB, every device needed its own specific cable and port. MCP does the same thing for AI integrations: one standard that works everywhere.

Key Takeaway

MCP is an open protocol that gives AI models a standard way to access tools, data, and services. Instead of building custom integrations for every service, you build once against the MCP standard and it works with any compatible AI client.

The Problem MCP Solves

Let's say you want your AI assistant to do useful things: read files from your computer, query a database, search the web, or create tasks in your project management tool. With traditional API integrations, here is what happens:

1
Study the API documentation

Every service has its own API with its own authentication method, data format, and endpoint structure. You need to learn each one separately.

2
Write custom integration code

For each service, you write specific code to handle requests, parse responses, manage errors, and deal with authentication tokens.

3
Maintain it over time

APIs change. Versions get deprecated. Authentication flows get updated. Every integration you built now needs ongoing maintenance.

4
Repeat for every AI tool

If you switch from one AI platform to another, or want to use multiple AI tools, you might need to rebuild those integrations from scratch.

This is the "M times N" problem. If you have M AI tools and N services to connect to, you end up with M x N custom integrations. That does not scale. With 5 AI tools and 10 services, you are looking at 50 different integrations to build and maintain.

MCP turns this into an "M plus N" problem instead. Each AI tool implements the MCP client once. Each service implements the MCP server once. Done. 5 AI tools plus 10 services equals 15 implementations, not 50.

How MCP Actually Works

The architecture is straightforward. There are three parts: the host, the client, and the server.

The Host

This is the AI application you are using. It could be Claude Desktop, an IDE with AI features, or any application that uses an AI model. The host is what the user interacts with.

The Client

The client lives inside the host. It manages the connection to an MCP server, handles the protocol details, and passes information between the AI model and the server. Each client maintains a one-to-one connection with a single server, so a host that uses several servers runs several clients.

The Server

The server is a small program that exposes specific capabilities. It could give access to a file system, a database, a web search API, or anything else. The server describes what it can do, and the AI model decides when to use it.

Here is what a typical interaction looks like: You ask your AI assistant to "find all open issues in my GitHub repository." The AI model recognises it needs the GitHub MCP server. The client sends a request to that server. The server calls the GitHub API, gets the data, and sends it back through the client to the model. The model then formats a nice answer for you.

The important part: the AI model did not need to know anything specific about the GitHub API. It only needed to know that a "GitHub" tool was available and what it could do. The MCP server handled all the API-specific details.
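
To make the exchange less abstract, here is a hedged sketch of roughly what the client and server say to each other, written as Python dictionaries. MCP messages are JSON-RPC 2.0 and tools/call is the protocol method for invoking a tool; the tool name, arguments, and response text below are hypothetical, not taken from the real GitHub server.

```python
# Roughly what travels between client and server in the GitHub example above,
# shown as Python dictionaries. "tools/call" is a real protocol method; the
# tool name, arguments, and response text are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "list_issues",  # a tool the server advertised during discovery
        "arguments": {"repo": "acme/webshop", "state": "open"},
    },
}

# The server calls the GitHub API behind the scenes, then replies with content
# the model can read and summarise for the user.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [
            {"type": "text", "text": "Open issues: #12 Fix login redirect, #15 ..."}
        ]
    },
}
```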

What MCP Servers Can Expose

MCP servers can offer three types of capabilities to AI models:

Tools

Actions the AI can perform

Functions that do something: send a message, create a file, run a query, make an API call. The AI model decides when to call these based on what the user asks for.

Resources

Data the AI can read

Structured information the model can access: file contents, database records, configuration values. Similar to GET endpoints in a REST API, but with a standard format.

Prompts

Templates for common tasks

Pre-written prompt templates that help users interact with the server's capabilities. They guide the AI toward the best way to use the available tools and resources.

A single MCP server can expose any combination of these. A file system server might expose tools (create, delete, move files), resources (file contents), and prompts ("summarise this project's structure"). The AI model discovers all of this automatically when the server connects.
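
To show how little code this takes, here is a minimal sketch of such a file system server using the official Python SDK's FastMCP helper. The tool, resource, and prompt names are my own illustration, not the actual community filesystem server, and SDK details may differ between versions.

```python
# A minimal file system server sketch built with the official Python SDK's
# FastMCP helper. Names are illustrative; SDK details may vary by version.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("filesystem-demo")

BASE_DIR = Path.cwd()  # only expose files under the current directory

@mcp.tool()
def move_file(source: str, destination: str) -> str:
    """Tool: an action the model can choose to perform."""
    (BASE_DIR / source).rename(BASE_DIR / destination)
    return f"Moved {source} to {destination}"

@mcp.resource("demo://files/{name}")
def read_file(name: str) -> str:
    """Resource: data the model can read (a file's contents)."""
    return (BASE_DIR / name).read_text()

@mcp.prompt()
def summarise_project() -> str:
    """Prompt: a reusable template that guides the model."""
    return "Summarise the structure of the project in the current directory."

if __name__ == "__main__":
    mcp.run()  # defaults to stdio, so a local host can launch it as a subprocess
```

Once this server is running, the host lists all three capabilities during the handshake; nothing about them is hard-coded into the model.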

MCP vs Traditional API Integration

Let me be clear: MCP does not replace APIs. Most MCP servers actually use APIs behind the scenes. What MCP replaces is the custom glue code you write to connect AI models to those APIs.

Here is a practical comparison:

API
Traditional API integration

You write code that understands the specific API. Your code handles authentication, constructs the right HTTP requests, parses the responses, and manages errors. If you want your AI to use this API, you also need to write tool definitions, response formatting, and context management. Every API you add means more custom code.

MCP
MCP server integration

Someone builds an MCP server that wraps the API (this might be you, the API provider, or the community). You point your AI tool at that server. The AI automatically discovers what the server can do and starts using it. If you switch AI tools, the same server keeps working.
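
Here is a hedged sketch of what that looks like from the client side, using the stdio client from the same Python SDK. The server script name refers to the file system sketch above and is an assumption; exact class names may vary between SDK versions.

```python
# A client-side sketch: launch a local MCP server over stdio and discover what
# it can do. "filesystem_server.py" is an assumed file name for the earlier sketch.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["filesystem_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discovery: what can this server do?
            print([tool.name for tool in tools.tools])
            # In a real host the model decides when to call a tool; here we call
            # one directly. The file names are placeholders.
            result = await session.call_tool(
                "move_file",
                arguments={"source": "draft.txt", "destination": "final.txt"},
            )
            print(result.content)

asyncio.run(main())
```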

Where MCP Has a Clear Advantage

Reusability

Build an MCP server once, use it with Claude, with Cursor, with any MCP-compatible tool. No rewiring needed.

Discovery

AI models can automatically learn what tools are available and how to use them. With raw APIs, you have to manually describe each capability to the model.

Standardisation

One protocol, one way to describe tools, one way to handle requests and responses. No more learning a different pattern for every service.

Security model

MCP includes built-in patterns for controlling what the AI can access. The user stays in control and can approve or deny specific actions.

Where APIs Still Make More Sense

Not everything needs MCP. If you are building a traditional application where the backend calls another service directly, a regular API call is simpler and more appropriate. MCP is specifically designed for the AI-to-service connection. It adds value when an AI model needs to dynamically discover and use external capabilities.

Important

MCP and APIs are not competing standards. MCP sits on top of existing APIs. Think of it as a translation layer that helps AI models use APIs without needing custom code for each one.

What This Looks Like in Practice

To make this concrete, here are a few scenarios where MCP makes a real difference:

1
Developer working with databases

Instead of copying data out of your database and pasting it into a chat, you connect a database MCP server. Your AI assistant can now query the database directly, understand the schema, and help you write better queries. The data stays where it is. You just gave your AI the ability to look at it.

2
Team using multiple tools

Your team uses GitHub for code, Linear for tasks, and Slack for communication. With MCP servers for each, your AI assistant can check the status of a pull request, find the related task, and post an update to the right channel. All from a single conversation.

3
Company building internal tools

You have internal APIs that only your company uses. By wrapping them in MCP servers, every employee with an AI assistant can access those systems naturally, through conversation. No training needed on specific API endpoints or tools.

The Enterprise Architecture Perspective

If you work in Enterprise Architecture, you are probably reading this and thinking: "Interesting technology, but where does it fit in my reference architecture?" Good question. MCP servers deserve a place in your IT component catalogue, and here is why.

MCP Servers Are IT Components

An MCP server is a deployable piece of software that connects an AI model to a specific system or capability. It has dependencies, it needs hosting, it has a lifecycle. That makes it an IT component, just like a microservice, an API gateway, or a middleware adapter.

And yet, most organisations today treat MCP servers as informal tools that developers install on their laptops. Nobody tracks them. Nobody governs them. Nobody knows which MCP servers connect to which production systems. From an EA standpoint, that is a blind spot.

Application Portfolio

Each MCP server should be registered as a component in your application portfolio. Document what system it connects to, who maintains it, and what data it can access. Treat it like you would treat any integration adapter.

Data Architecture

MCP servers give AI models access to your data. That means they fall under your data governance policies. Which data classifications can flow through MCP? Who approves access? These questions need answers before you scale.

Security Architecture

Every MCP server is an access point. It authenticates against backend systems, often with service accounts or API keys. Your security team should review and approve these connections like they would for any integration.

Integration Architecture

MCP introduces a new integration pattern: AI-to-system. This sits alongside your existing patterns (API-to-API, event-driven, batch). It needs its own standards, naming conventions, and deployment guidelines.

Placing MCP in Your Technology Roadmap

Most organisations are somewhere between "we have not thought about this" and "a few developers are using MCP servers on their own." Here is a realistic progression to consider for your roadmap:

Q1
Discovery and inventory

Find out what MCP servers are already in use across your organisation. Developers tend to adopt these tools before IT governance catches up. Identify which production systems are being accessed and through which servers. Build an initial inventory.

Q2
Standards and governance

Define a standard for how MCP servers should be built, deployed, and maintained in your organisation. Decide on approved MCP servers (community or custom-built), authentication patterns, data access policies, and hosting requirements. Add MCP to your integration architecture documentation.

Q3
Controlled rollout

Deploy approved MCP servers to specific teams or use cases. Start with read-only access to low-sensitivity systems. Measure adoption, collect feedback, and refine your standards based on what you learn.

Q4
Scale and optimise

Expand to more systems and more teams. Consider building custom MCP servers for internal systems that would benefit from AI access. Establish a shared MCP server registry so teams can discover and reuse existing servers instead of building duplicates.

Why This Matters Now

AI tools are already part of your technology landscape, whether you planned for them or not. Developers are using AI assistants daily. Some of them have already connected those assistants to company systems through MCP. The choice is not whether MCP will be part of your architecture. The choice is whether you govern it proactively or discover it reactively after an incident.

The good news: MCP is a standard protocol with clear boundaries. It is far easier to govern than ad-hoc scripts or custom integrations that developers build on their own. By recognising MCP servers as proper IT components early, you set the foundation for safe and scalable AI adoption across your organisation.

EA Recommendation

Add "MCP Server" as a component type in your application portfolio today. Even if you only have two or three servers in use, establishing the pattern now saves you from a chaotic inventory exercise later when adoption grows.

Getting Started

If you want to try MCP, the easiest path is to start as a user rather than a builder.

  1. Pick an AI tool that supports MCP. Claude Desktop is a good starting point since Anthropic created the protocol, but many tools now support it.
  2. Install a community MCP server. There are servers available for file systems, databases, GitHub, web search, and many other services. Most of them are straightforward to set up.
  3. Use it. Once connected, just ask your AI assistant to do things that require the server's capabilities. It will figure out when and how to use the tools.

If you want to build your own MCP server, the specification is open and well documented. Official SDKs are available in several languages, including Python and TypeScript, and building a basic server takes less than an hour if you already know which API you want to wrap.
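
To give a feel for what "wrapping an API" means in practice, here is a hedged sketch of a one-tool server in front of a hypothetical internal status API. The endpoint URL and response fields are placeholders; only the FastMCP wiring is the point.

```python
# A sketch of wrapping an existing HTTP API as an MCP tool. The URL and the
# response fields are placeholders for whatever internal API you want to expose.
import httpx

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-status")

@mcp.tool()
def get_service_status(service: str) -> str:
    """Return the current status of an internal service (read-only)."""
    resp = httpx.get(f"https://status.example.internal/api/services/{service}")
    resp.raise_for_status()
    data = resp.json()
    return f"{service}: {data.get('status', 'unknown')}"

if __name__ == "__main__":
    mcp.run()
```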

Pro Tip

Start with a simple, read-only MCP server. For example, one that exposes data from an internal system. This lets you learn the protocol without worrying about write operations or security implications.
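
Following that tip, here is a sketch of what such a read-only server could look like: it exposes data as a resource and offers no tools at all, so the model can look but never change anything. The database file and table are illustrative.

```python
# A read-only MCP server sketch: one resource, no tools, no write paths.
# The database file name and table schema are illustrative.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-readonly")

@mcp.resource("orders://recent")
def recent_orders() -> str:
    """Expose the ten most recent orders as text the model can read."""
    conn = sqlite3.connect("file:orders.db?mode=ro", uri=True)  # read-only open
    try:
        rows = conn.execute(
            "SELECT id, customer, total FROM orders "
            "ORDER BY created_at DESC LIMIT 10"
        ).fetchall()
    finally:
        conn.close()
    return "\n".join(f"#{order_id} {customer} {total}" for order_id, customer, total in rows)

if __name__ == "__main__":
    mcp.run()
```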

The Bottom Line

MCP exists because AI models need a standard way to interact with the outside world, and building custom integrations for every combination of AI tool and external service simply does not work at scale.

The protocol is not trying to replace REST APIs or GraphQL. It is adding a layer on top that makes it practical for AI models to use those services without requiring custom code for each one.

Whether you are a developer, an architect, or someone who just uses AI tools daily, MCP is worth understanding. It is still early, and the ecosystem is growing fast. The organisations that figure out how to use this well will have AI assistants that are genuinely useful, not just clever chatbots that can only work with text.

Where To Go From Here

Install one MCP server. Use it for a week. See what changes in your workflow. That hands-on experience is worth more than any article, including this one.
