Modern AI systems, especially large language models (LLMs), are no longer isolated engines responding to static inputs. They’re evolving into intelligent agents, copilots, and autonomous systems that interact with their environment, reason over external data, and adapt in real time.

But there’s a fundamental problem:

LLMs are powerful, but they don’t know anything outside of what they were trained on.

Even the most advanced model can’t:

  • Access your company’s latest data
  • React to real-time events
  • Stay up to date with your tools or APIs

To become useful assistants, these models need context: live documents, system state, chat history, user behavior, business logic, etc.

The New Stack: Context-Driven AI

To fill that gap, a new class of systems emerged that deliver live context to AI models. These include:

  • Retrieval-Augmented Generation (RAG): Search for relevant documents and inject them into the model’s input
  • Agent frameworks: Let AI agents recall memory, query tools, and plan over time
  • Tool use and function calling: Extend models with external APIs

All of these require an interface between the model and the world — a protocol to request and deliver context, memory, or actions.

This is where protocols like Model Context Protocol (MCP) and Agent2Agent come into play.

They’re not just developer conveniences. They’re the beginning of a new layer in the AI stack, where context becomes infrastructure, and where security risks emerge that weren’t even possible in traditional model deployments.

What Is MCP, and How is it Used?

Model Context Protocol (MCP) is an emerging open protocol designed to allow language models and AI agents to request, fetch, and manage contextual information dynamically at inference time. It enables AI systems to operate with real-time, external memory, enhancing their capabilities well beyond their training data.

Think of MCP as the middleware layer that lets an LLM or agent say:

“Here’s the user’s query, go get me everything I need to answer it accurately”

It defines how to standardize the retrieval, enrichment, and delivery of context between AI models and context servers like vector databases, file systems, knowledge graphs, or live APIs.

Architecture of MCP Servers

A flowchart with the Upwind logo showing data flow: User Input → AI Model/Agent → MCP Server, which branches to Vector DB, Docs/APIs, and Metadata Sources.

MCP Client: Runs alongside the model and knows how to request needed context.

MCP Server: Central context provider. Takes a query, retrieves the most relevant pieces, formats them, and sends them back.

Connectors: The server connects to a variety of sources e.g., vector DBs, web APIs, S3, SQL databases through plug-ins or adapters.

Core Risks of MCP Servers

The MCP server introduces a new layer in the AI stack, one that connects models to internal systems. It doesn’t store data or run the model, but it acts as the bridge between them. And it’s in this bridging role, where context is fetched and injected, that new security considerations start to emerge.

Key threat dimensions:

  • Context Poisoning: Attackers can manipulate upstream data (e.g., documents, tickets, database entries) to influence LLM outputs without touching the model itself.
  • Insecure Connectors: MCPs often integrate with many internal systems. If compromised, they can be used to pivot into those systems using stored credentials or open API access.
  • Lack of Authentication/Authorization: implementations that don’t verify the identity or permissions of the requester, will allow any service or user to pull internal context.
  • Shadow Access: Over time, teams will connect MCPs to more and more sources, creating an undocumented and often unknown mesh of access paths across the environment.
  • Supply Chain Risk: Using open-source or internal MCP implementations without security review can introduce unverified code that handles sensitive traffic.

Threat actors targeting MCP servers may exploit several paths to gain access or manipulate behavior. A particularly subtle vector is prompt injection; attackers can insert hidden instructions into upstream data like tickets or documents, which then feeds to the model. This allows manipulation of model output, potential data leakage, or triggering of unintended actions, especially in autonomous setups. By compromising the MCP or the systems it pulls from, attackers gain a quiet but powerful foothold for data exfiltration, lateral movement, or model misuse.

Securing MCP Servers with Upwind

As MCP servers become a critical part of the AI stack, connecting internal data sources with large language models, they also introduce new security blind spots. Without the right tools, it’s easy for these services to go unnoticed, misconfigured, or vulnerable. Upwind is designed to solve this, giving teams the visibility and context they need to secure MCP infrastructure without slowing development.

Screenshot of the Upwind security platform showing a visual map of a Google Cloud cluster, with icons representing an mcp-agent connected to a Kubelet, and detailed information about the mcp-agent on the right.

Upwind’s research team will soon publish new findings to help you understand and mitigate emerging threats introduced by AI inter-communication platforms like MCP. Our focus will span across key security domains, from shift-left vulnerability management, to CSPM and runtime-powered risk analysis, and all the way to the right with real-time threat detection.

Upwind customers can use the following modules to address these emerging risks now:

  • Automatic MCP Identification: Upwind continuously scans your environment and tags services acting as MCPs, eliminating shadow infrastructure and helping you maintain an accurate inventory.
  • Behavioral Visibility: By analyzing network flows and access patterns, Upwind gives you real-time insights into how MCP servers interact with internal systems — so you know what they’re pulling, from where, and when.
  • Vulnerability Detection: Whether you’re running open-source MCPs or custom-built versions, Upwind surfaces any known vulnerabilities introduced by their code or dependencies in a centralized vulnerability dashboard.
  • API Threat Detection: Upwind’s API-level detection monitors traffic to and from MCP servers, alerting you to suspicious requests.

Upwind brings the same observability and security posture that modern teams expect for cloud-native workloads, now extended to the emerging layer of model context infrastructure.