🤖 AI Summary
This study addresses a critical yet overlooked security threat to large language model (LLM) agents: attacks carried out through malicious implementations of tool code. We present MalTool, the first systematic framework for studying this threat, which uses a coding LLM, guided by a behavior taxonomy grounded in the confidentiality-integrity-availability (CIA) triad, to synthesize diverse and functionally correct malicious tools, even when the coding LLM is safety-aligned. Through an automated verification and iterative refinement loop, MalTool generates both standalone malicious tools and variants embedded within otherwise legitimate ones. We introduce the first taxonomy of malicious tool behaviors targeting LLM agents and produce 1,200 unique malicious tools alongside 5,287 realistic variants. Experimental evaluation shows that state-of-the-art detection systems, including commercial services such as VirusTotal, achieve alarmingly low detection rates against these tools, exposing severe limitations in current defenses.
📝 Abstract
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects it during task execution, the tool can compromise the user's security and privacy. Prior work primarily focuses on manipulating tool names and descriptions to increase the likelihood of installation by users and selection by LLM agents. However, a successful attack also requires embedding malicious behaviors in the tool's code implementation, which remains largely unexplored. In this work, we bridge this gap by presenting the first systematic study of malicious tool code implementations. We first propose a taxonomy of malicious tool behaviors based on the confidentiality-integrity-availability triad, tailored to LLM-agent settings. To investigate the severity of the risks posed by attackers exploiting coding LLMs to automatically generate malicious tools, we develop MalTool, a coding-LLM-based framework that synthesizes tools exhibiting specified malicious behaviors, either as standalone tools or embedded within otherwise benign implementations. To ensure functional correctness and structural diversity, MalTool leverages an automated verifier that validates whether generated tools exhibit the intended malicious behaviors and differ sufficiently from prior instances, iteratively refining generations until success. Our evaluation demonstrates that MalTool is highly effective even when coding LLMs are safety-aligned. Using MalTool, we construct two datasets of malicious tools: 1,200 standalone malicious tools and 5,287 real-world tools with embedded malicious behaviors. We further show that existing detection methods, including commercial malware detection approaches such as VirusTotal and methods tailored to the LLM-agent setting, exhibit limited effectiveness at detecting the malicious tools, highlighting an urgent need for new defenses.
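The verify-and-refine mechanism the abstract describes, regenerating a candidate until it passes both a behavior check and a diversity check, follows a generic pattern that can be sketched as below. This is a minimal illustration only: all names (`refine_until_valid`, `generate`, `verify`, `is_diverse`) are hypothetical placeholders, not MalTool's actual interface, and the sketch deliberately contains no tool-generation logic of its own.

```python
from typing import Callable, Optional


def refine_until_valid(
    generate: Callable[[str], str],     # e.g., a call to a coding LLM
    verify: Callable[[str], bool],      # does the output exhibit the target behavior?
    is_diverse: Callable[[str], bool],  # is it sufficiently different from prior outputs?
    spec: str,
    max_rounds: int = 5,
) -> Optional[str]:
    """Regenerate with feedback until a candidate passes both checks."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate(spec + feedback)
        if verify(candidate) and is_diverse(candidate):
            return candidate
        # Fold a failure note back into the prompt for the next attempt.
        feedback = "\nPrevious attempt failed verification; revise."
    return None  # give up after max_rounds unsuccessful attempts
```

The key design point, per the abstract, is that verification gates acceptance: a generation is only kept once an automated checker confirms the intended behavior and sufficient structural novelty, otherwise the feedback loop runs again.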