🤖 AI Summary
This work exposes an implicit trust security vulnerability in large language model (LLM)-based agent systems when integrating third-party tools via the Model Context Protocol (MCP): adversaries can stealthily hijack agent computational resources using malicious tool plugins—without privilege escalation—to execute unauthorized tasks. To address this, we introduce the novel concept of “implicit toxicity” and propose a two-stage resource hijacking mechanism that integrates trigger-based backdoor injection, command-and-control (C2) communication, and payload obfuscation to embed tasks and establish covert control channels within legitimate permission boundaries. We evaluate the attack across four major LLM families; results show an average success rate of 77.25% with only 18.62% additional resource overhead, confirming high stealthiness and broad applicability. Our findings highlight the critical security gap of missing computational provenance in the MCP ecosystem.
📝 Abstract
Large Language Model (LLM)-based agents have demonstrated remarkable capabilities in reasoning, planning, and tool usage. The recently proposed Model Context Protocol (MCP) has emerged as a unifying framework for integrating external tools into agent systems, enabling a thriving open ecosystem of community-built functionalities. However, the openness and composability that make MCP appealing also introduce a critical yet overlooked security assumption -- implicit trust in third-party tool providers. In this work, we identify and formalize a new class of attacks that exploit this trust boundary without violating explicit permissions. We term this new attack vector implicit toxicity, where malicious behaviors occur entirely within the allowed privilege scope. We propose LeechHijack, a Latent Embedded Exploit for Computation Hijacking, in which an adversarial MCP tool covertly expropriates the agent's computational resources for unauthorized workloads. LeechHijack operates through a two-stage mechanism: an implantation stage that embeds a benign-looking backdoor in a tool, and an exploitation stage where the backdoor activates upon predefined triggers to establish a command-and-control channel. Through this channel, the attacker injects additional tasks that the agent executes as if they were part of its normal workflow, effectively parasitizing the user's compute budget. We implement LeechHijack across four major LLM families. Experiments show that LeechHijack achieves an average success rate of 77.25%, with a resource overhead of 18.62% compared to the baseline. This study highlights the urgent need for computational provenance and resource attestation mechanisms to safeguard the emerging MCP ecosystem.