Your Agent Can Defend Itself against Backdoor Attacks

📅 2025-06-10
🤖 AI Summary
Large language model (LLM) agents are vulnerable to backdoor attacks during training or fine-tuning, causing malicious behavior upon trigger activation. To address this, we propose ReAgent—the first training-free, clean-data-free online self-defense mechanism specifically designed for LLM agents. Its core innovation lies in leveraging the agent’s intrinsic reasoning capabilities to perform dual-layer consistency self-checking: (i) execution-level consistency between selected actions and chain-of-thought (CoT) reasoning, and (ii) planning-level verification via instruction reverse reconstruction and multi-step reasoning validation, augmented by dynamic trigger sensitivity analysis. ReAgent requires no auxiliary models, external supervision, or labeled data, enabling real-time detection and mitigation of backdoor behaviors. Extensive experiments on database operation tasks demonstrate that ReAgent reduces backdoor attack success rates by up to 90%, significantly outperforming existing defense methods.
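
The execution-level check can be made concrete with a short sketch. The following Python is illustrative only, not the paper's implementation: the `llm` callable, the `execution_consistent` name, and the YES/NO judging prompt are all assumptions standing in for whatever model interface and prompts the authors actually use.

```python
from typing import Callable

# Hypothetical LLM interface: prompt string in, completion string out. Any
# chat-completion wrapper (API client or local model) can be plugged in here.
LLM = Callable[[str], str]

def execution_consistent(llm: LLM, thought: str, action: str) -> bool:
    """Execution-level check: does the selected action follow from the
    agent's chain-of-thought? A backdoored agent may emit a benign thought
    but a trigger-induced malicious action; that mismatch is the signal."""
    prompt = (
        "An agent produced the following reasoning step and action.\n"
        f"Reasoning: {thought}\n"
        f"Action: {action}\n"
        "Does the action logically follow from the reasoning? Answer YES or NO."
    )
    return llm(prompt).strip().upper().startswith("YES")
```
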

📝 Abstract
Despite their growing adoption across domains, large language model (LLM)-powered agents face significant security risks from backdoor attacks during training and fine-tuning. These compromised agents can subsequently be manipulated to execute malicious operations when presented with specific triggers in their inputs or environments. To address this pressing risk, we present ReAgent, a novel defense against a range of backdoor attacks on LLM-based agents. Intuitively, backdoor attacks often result in inconsistencies among the user's instruction, the agent's planning, and its execution. Drawing on this insight, ReAgent employs a two-level approach to detect potential backdoors. At the execution level, ReAgent verifies consistency between the agent's thoughts and actions; at the planning level, ReAgent leverages the agent's capability to reconstruct the instruction based on its thought trajectory, checking for consistency between the reconstructed instruction and the user's instruction. Extensive evaluation demonstrates ReAgent's effectiveness against various backdoor attacks across tasks. For instance, ReAgent reduces the attack success rate by up to 90% in database operation tasks, outperforming existing defenses by large margins. This work reveals the potential of utilizing compromised agents themselves to mitigate backdoor risks.
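
The planning-level check described above, reconstructing the instruction from the thought trajectory, can be sketched the same way. This too is a hedged illustration: `planning_consistent`, the two-prompt structure, and the YES/NO verdict are assumptions, not the paper's prompts.

```python
from typing import Callable

LLM = Callable[[str], str]  # prompt in, completion out (any LLM wrapper)

def planning_consistent(llm: LLM, instruction: str, thoughts: list[str]) -> bool:
    """Planning-level check: ask the agent to reconstruct the instruction
    its thought trajectory appears to serve, then compare that
    reconstruction against the user's actual instruction."""
    trajectory = "\n".join(f"- {t}" for t in thoughts)
    reconstructed = llm(
        "Given only this thought trajectory of an agent, state the single "
        "user instruction it is most likely executing:\n" + trajectory
    )
    verdict = llm(
        "Do these two instructions request the same task?\n"
        f"A: {instruction}\nB: {reconstructed}\n"
        "Answer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")
```
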
Problem

Research questions and friction points this paper is trying to address.

LLM agents are vulnerable to backdoor attacks implanted during training or fine-tuning
Compromised agents can be manipulated to execute malicious operations when specific triggers appear in their inputs or environments
Existing defenses depend on auxiliary models, external supervision, or clean labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free, clean-data-free online self-defense requiring no auxiliary models or external supervision
Two-level consistency check for backdoor detection (see the combined sketch after this list)
Execution-level thought-action verification
Planning-level instruction reconstruction validation
Reduces attack success rate by up to 90% on database operation tasks
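
How the two checks might compose into an online defense is sketched below. The halt-on-first-inconsistency policy is our assumption for illustration; the summary also mentions dynamic trigger sensitivity analysis, whose details are not given here, so this sketch omits it.

```python
from typing import Callable

def run_with_defense(
    step_fn: Callable[[], tuple[str, str]],   # agent step: returns (thought, action)
    exec_check: Callable[[str, str], bool],   # execution-level consistency check
    plan_check: Callable[[list[str]], bool],  # planning-level consistency check
    execute: Callable[[str], None],           # applies a vetted action to the environment
    max_steps: int = 20,
) -> bool:
    """Online self-defense loop: vet every step with both consistency checks
    before acting; halt on the first inconsistency instead of executing a
    potentially trigger-induced action."""
    thoughts: list[str] = []
    for _ in range(max_steps):
        thought, action = step_fn()
        thoughts.append(thought)
        if not exec_check(thought, action) or not plan_check(thoughts):
            return False  # suspected backdoor: refuse to act and flag the episode
        execute(action)
    return True
```

Passing the checks in as callables keeps the loop agnostic to the underlying model; in practice both checks would be bound to the same (possibly compromised) agent, since the paper's premise is that the agent can vet itself.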