
Journey from AI to LLMs and MCP - 6 - Enter the Model Context Protocol (MCP) — The Interoperability Layer for AI Agents


We’ve spent the last few posts exploring the growing power of AI agents—how they can reason, plan, and take actions across complex tasks. And we’ve looked at the frameworks that help us build these agents. But if you’ve worked with them, you’ve likely hit a wall: every new tool, data source, or environment needs its own bespoke integration, wired up differently for every agent framework.

What if we had a standard that let any agent talk to any data source or tool, regardless of where it lives or what it’s built with?

That’s exactly what the Model Context Protocol (MCP) brings to the table.

And if you’re from the data engineering world, MCP is to AI agents what the Apache Iceberg REST protocol is to analytics:

A universal, pluggable interface that enables many clients to interact with many servers—without tight coupling.

What Is the Model Context Protocol (MCP)?

MCP is an open protocol that defines how LLM-powered applications (like agents, IDEs, or copilots) can access context, tools, and actions in a standardized way.

Think of it as the “interface layer” between:

  - LLM applications (agents, IDEs, copilots) on one side
  - The tools, data sources, and services they need on the other

It defines a common language for exchanging:

  - Resources: contextual data (files, logs, records) the model can read
  - Tools: functions the model can invoke to take actions
  - Prompts: reusable prompt templates a server can offer

This allows you to plug in new capabilities without rearchitecting your agent or retraining your model.
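Under the hood, that common language is JSON-RPC 2.0. To get a feel for what actually crosses the wire, here is a `tools/call` request sketched as a Python dict (the tool name and arguments are made up for illustration):

```python
# An MCP message is a JSON-RPC 2.0 object. This is what a host sends when
# it wants a server to invoke a tool. The tool name and arguments below
# are illustrative, not from any particular server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_script",
        "arguments": {"script": "test.sh"},
    },
}
```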

🧱 How MCP Mirrors Apache Iceberg’s REST Protocol

Let’s draw the parallel:

| Concept | Apache Iceberg REST | Model Context Protocol (MCP) |
| --- | --- | --- |
| Standardized API | REST endpoints for table ops | JSON-RPC messages for context/tools |
| Decouples client/server | Any engine ↔ any Iceberg catalog | Any LLM/agent ↔ any tool or data backend |
| Multi-client support | Spark, Trino, Flink, Dremio | Claude, custom agents, IDEs, terminals |
| Pluggable backends | S3, HDFS, MinIO, Pure Storage, GCS | Filesystem, APIs, databases, web services |
| Interoperable tooling | REST = portable across ecosystems | MCP = portable across LLM environments |

Just as Iceberg REST made it possible for Dremio to talk to a table created in Snowflake, MCP allows a tool exposed in Python on your laptop to be used by an LLM in Claude Desktop, a VS Code agent, or even a web-based chatbot.

🔁 MCP in Action — A Real-World Use Case

Imagine this workflow:

  1. You’re coding in an IDE powered by an AI assistant
  2. The model wants to read your logs and run some shell scripts
  3. Your data lives locally, and your tools are custom-built in Python

With MCP:

  - Your IDE’s assistant acts as the host, running an MCP client
  - A local MCP server exposes your logs as resources and your Python scripts as tools
  - The model discovers and invokes those capabilities through standard MCP messages, with no bespoke glue code

And tomorrow, you could replace that assistant with a different model or switch to a browser-based environment—and everything would still work.
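Here is a minimal sketch of what that local server could look like, using the FastMCP helper from the official MCP Python SDK (`pip install "mcp[cli]"`). The URI scheme, file paths, and function names are illustrative:

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK).
# It exposes local log files as resources and a guarded shell-script
# runner as a tool. Paths and names are illustrative.
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-dev-tools")

@mcp.resource("logs://{name}")
def read_log(name: str) -> str:
    """Return the contents of a local log file as model-readable context."""
    return Path("logs", f"{name}.log").read_text()

@mcp.tool()
def run_script(script: str) -> str:
    """Run an allow-listed shell script and return its combined output."""
    allowed = {"build.sh", "test.sh"}  # never expose arbitrary execution
    if script not in allowed:
        return f"script {script!r} is not allowed"
    result = subprocess.run(["bash", script], capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP host can launch it
```

Because the server speaks plain MCP over stdio, nothing in it is coupled to a particular assistant: any MCP-aware host can launch it and use the same resources and tools.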

The Core Components of MCP

Let’s break down the architecture:

1. Hosts

These are environments where the LLM application lives (e.g., Claude Desktop, your IDE). A host embeds one or more MCP clients and coordinates their connections to servers.

2. Clients

Embedded in the host, each client maintains a connection to a specific server. It speaks MCP’s message protocol and exposes capabilities upstream to the model.

3. Servers

Programs that expose capabilities like:

  - Resources: files, logs, database records, API responses
  - Tools: executable functions the model can call
  - Prompts: reusable prompt templates

Servers can live anywhere: locally on your machine, behind an API, or running in a cloud environment.
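To see how a client wires up to such a server, here is a minimal sketch using the same SDK’s client session. It assumes the server above was saved as `server.py`; that filename and the tool call are illustrative:

```python
# Minimal MCP client sketch (assumes the official `mcp` Python SDK).
# It launches the server from the earlier example over stdio, performs
# the MCP handshake, lists the server's tools, and calls one.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # handshake: negotiate capabilities
            tools = await session.list_tools()  # discover what the server offers
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool("run_script", {"script": "test.sh"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```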

What Can MCP Servers Do?

A server can surface data as resources for the model to read, expose tools for it to act with, offer prompt templates for common workflows, and even request completions from the host’s model via sampling.

And all of this is done in a transport-agnostic (stdio or HTTP), secure, pluggable format.

Why This Matters

With MCP, we finally get interoperability in the AI stack—a shared interface layer between:

  - Models and agents on one side
  - Tools, data sources, and services on the other

It gives us:

  - Plug-and-play capabilities: add a server, and every MCP-aware host can use it
  - Decoupled development: tool builders and agent builders can ship independently
  - Portability: the same server works across Claude Desktop, IDE agents, and custom apps
  - Freedom from lock-in: swap models or hosts without rewriting integrations

In short, MCP helps you go from monolithic, tangled agents to modular, composable AI systems.

What’s Next: Diving Deeper into MCP Internals

In the next few posts, we’ll dig deeper into each part of MCP—the architecture, the message lifecycle, and the core primitives (resources, tools, and prompts) that servers expose.