๐ค AI Summary
Existing black-box, static-command approaches for information-stealing attacks against LLM-based tool-learning systems suffer from poor stealth and high detectability. Method: This paper proposes AutoCMD, a dynamic command generation framework grounded in the โimitate-the-familiarโ principle. AutoCMD models toolchain context, integrates pretraining with target-system exemplars via reinforcement learning, and applies dynamic prompt engineering to adaptively generate malicious commands conditioned on upstream tool dependencies. Contribution/Results: AutoCMD enables cross-toolchain reasoning and target-system-specific adaptation, significantly enhancing attack stealth and generalizability: it improves the information-theft attack success rate (ASR_Theft) by 13.2% and transfers effectively to unseen tool systems. Additionally, the paper validates four practical defense strategies, establishing a novel paradigm for co-evolutionary attack-defense research.
๐ Abstract
Information theft attacks pose a significant risk to Large Language Model (LLM) tool-learning systems. Adversaries can inject malicious commands through compromised tools, manipulating LLMs to send sensitive information to these tools, which leads to potential privacy breaches. However, existing attack approaches are black-box oriented and rely on static commands that cannot adapt flexibly to the changes in user queries and the invocation chain of tools. It makes malicious commands more likely to be detected by LLM and leads to attack failure. In this paper, we propose AutoCMD, a dynamic attack comment generation approach for information theft attacks in LLM tool-learning systems. Inspired by the concept of mimicking the familiar, AutoCMD is capable of inferring the information utilized by upstream tools in the toolchain through learning on open-source systems and reinforcement with target system examples, thereby generating more targeted commands for information theft. The evaluation results show that AutoCMD outperforms the baselines with +13.2% $ASR_{Theft}$, and can be generalized to new tool-learning systems to expose their information leakage risks. We also design four defense methods to effectively protect tool-learning systems from the attack.