🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) agents in autonomous systems to indirect prompt injection attacks, wherein adversaries embed malicious instructions within tool-call responses to hijack agent decision-making. The authors propose a lightweight defense mechanism that requires no additional training, leveraging structured parsing and content filtering of tool outputs to precisely excise adversarial payloads while preserving legitimate information. By fully exploiting the native capabilities of the LLM, the method maintains state-of-the-art utility under attack (UA) while reducing attack success rate (ASR) to the lowest level reported to date, significantly outperforming existing approaches based on detection models or prompt engineering.
📝 Abstract
As LLM agents transition from digital assistants to physical controllers in autonomous systems and robotics, they face an escalating threat from indirect prompt injection. By embedding adversarial instructions into the results of tool calls, attackers can hijack the agent's decision-making process to execute unauthorized actions. This vulnerability poses a significant risk as agents gain more direct control over physical environments. Existing defense mechanisms against Indirect Prompt Injection (IPI) generally fall into two categories. The first involves training dedicated detection models; however, this approach entails high computational overhead for both training and inference, and requires frequent updates to keep pace with evolving attack vectors. Alternatively, prompt-based methods leverage the inherent capabilities of LLMs to detect or ignore malicious instructions via prompt engineering. Despite their flexibility, most current prompt-based defenses suffer from high Attack Success Rates (ASR), demonstrating limited robustness against sophisticated injection attacks. In this paper, we propose a novel method that provides LLMs with precise data via tool result parsing while effectively filtering out injected malicious code. Our approach achieves competitive Utility under Attack (UA) while maintaining the lowest Attack Success Rate (ASR) to date, significantly outperforming existing methods. Code is available at GitHub.