MCPShield: A Security Cognition Layer for Adaptive Trust Calibration in Model Context Protocol Agents

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical security vulnerabilities in large language model (LLM) agents that invoke third-party tools via the Model Context Protocol (MCP), where blind trust in unverified servers and the absence of end-to-end validation mechanisms expose systems to novel threats. To mitigate these risks, we propose MCPShield—a pluggable security cognition layer that, for the first time, integrates human experience–driven verification into MCP-based agents. MCPShield establishes an adaptive security framework spanning the entire tool invocation lifecycle: pre-call metadata-guided probing, in-call runtime event monitoring with enforced execution boundaries, and post-call reasoning based on historical trajectories. Extensive experiments demonstrate that MCPShield effectively defends against six emerging classes of MCP attacks across six mainstream LLM agents, achieving zero false positives, minimal overhead, and strong generalization, thereby offering significant practical utility.

📝 Abstract
The Model Context Protocol (MCP) standardizes tool use for LLM-based agents and enables third-party servers. This openness introduces a security misalignment: agents implicitly trust tools exposed by potentially untrusted MCP servers. Despite this utility, existing agents typically offer limited validation of third-party MCP servers, leaving them vulnerable to MCP-based attacks that exploit the misalignment between agents and servers throughout the tool invocation lifecycle. In this paper, we propose MCPShield, a plug-in security cognition layer that mitigates this misalignment and secures agents when they invoke MCP-based tools. Drawing inspiration from human experience-driven tool validation, MCPShield helps the agent form security cognition through metadata-guided probing before invocation. Our method constrains execution within controlled boundaries while cognizing runtime events, and subsequently updates the security cognition by reasoning over historical traces after invocation, mirroring human post-use reflection on tool behavior. Experiments demonstrate that MCPShield generalizes strongly in defending against six novel MCP-based attack scenarios across six widely used agentic LLMs, while avoiding false positives on benign servers and incurring low deployment overhead. Overall, our work provides a practical and robust security safeguard for MCP-based tool invocation in open agent ecosystems.
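The three-phase lifecycle described in the abstract (pre-call probing, in-call boundary enforcement, post-call reflection) can be sketched as a minimal wrapper around tool invocation. This is an illustrative sketch only: the class name, the keyword-based probe, the allow-list boundary, and the trust-update constants are all assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a three-phase security cognition layer for
# MCP tool calls, in the spirit of MCPShield. All names and heuristics
# below are illustrative assumptions, not the paper's artifact.

class SecurityCognitionLayer:
    def __init__(self):
        # Per-tool trust score ("security cognition"), updated over time.
        self.trust = {}

    def pre_call_probe(self, tool_name, metadata):
        """Pre-call: metadata-guided probing before trusting a tool."""
        desc = metadata.get("description", "").lower()
        # Toy heuristic: flag obvious prompt-injection text in metadata.
        suspicious = "ignore previous instructions" in desc
        self.trust.setdefault(tool_name, 1.0)
        if suspicious:
            self.trust[tool_name] = 0.0
        return not suspicious

    def in_call_monitor(self, tool_name, events):
        """In-call: check runtime events against an execution boundary."""
        allowed = {"read", "compute"}  # assumed allow-list boundary
        violations = [e for e in events if e not in allowed]
        if violations:
            self.trust[tool_name] = 0.0
        return violations

    def post_call_reflect(self, tool_name, trace_ok):
        """Post-call: update cognition from the historical trajectory."""
        t = self.trust.get(tool_name, 1.0)
        # Reward clean traces slightly; penalize bad ones heavily.
        self.trust[tool_name] = min(1.0, t + 0.1) if trace_ok else max(0.0, t - 0.5)
        return self.trust[tool_name]


# Usage: a benign tool passes all three phases and retains full trust.
shield = SecurityCognitionLayer()
ok = shield.pre_call_probe("weather", {"description": "Get weather"})
violations = shield.in_call_monitor("weather", ["read"])
score = shield.post_call_reflect("weather", trace_ok=ok and not violations)
```

The key design point the paper emphasizes is that cognition spans the whole lifecycle: a tool that passes the pre-call probe can still lose trust from in-call boundary violations or post-call trace analysis.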
Problem

Research questions and friction points this paper is trying to address.

Model Context Protocol
LLM agents
security misalignment
third-party tool trust
MCP-based attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

MCPShield
security cognition
Model Context Protocol
adaptive trust calibration
tool validation