🤖 AI Summary
This work addresses a critical security gap in the Model Context Protocol (MCP), where weakened behavioral constraints—introduced to accommodate compatibility requirements—create exploitable compliance vulnerabilities across multilingual SDK implementations, enabling a novel class of “compatibility abuse attacks.” The paper presents the first formalization of this attack model and introduces the first cross-language MCP compliance analysis framework. By constructing a language-agnostic intermediate representation and integrating LLM-guided static semantic analysis with formal attack modeling, the framework enables automated vulnerability discovery. Empirical evaluation demonstrates the feasibility of real-world attacks—including silent prompt injection and denial-of-service—against mainstream MCP SDKs, uncovering multiple high-severity non-compliance issues.
📝 Abstract
The Model Context Protocol (MCP) is a recently proposed interoperability standard that unifies how AI agents connect with external tools and data sources. By defining a set of common client-server message exchange clauses, MCP replaces fragmented integrations with a standardized, plug-and-play framework. However, to be compatible with diverse AI agents, the MCP specification relaxes many behavioral constraints into optional clauses, leading to misuse-prone SDK implementation. We identify it as a new attack surface that allows adversaries to achieve multiple attacks (e.g, silent prompt injection, DoS, etc.), named as \emph{compatibility-abusing attacks}. In this work, we present the first systematic framework for analyzing this new attack surface across multi-language MCP SDKs. First, we construct a universal and language-agnostic intermediate representation (IR) generator that normalizes SDKs of different languages. Next, based on the new IR, we propose auditable static analysis with LLM-guided semantic reasoning for cross-language/clause compliance analysis. Third, by formalizing the attack semantics of the MCP clauses, we build three attack modalities and develop a modality-guided pipeline to uncover exploitable non-compliance issues.