BackdoorAgent: A Unified Framework for Backdoor Attacks on LLM-based Agents

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the lack of a systematic understanding of how backdoor attacks propagate across the multi-stage workflows of large language model (LLM) agents. We propose the first agent-centric unified analysis framework, which decomposes agent workflows into three phases: planning, memory, and tool use. By combining phase-aware modeling, trigger injection, and trigger tracking, the framework systematically demonstrates that a backdoor implanted at a single stage can persistently activate across multiple steps and influence downstream outputs. Leveraging this framework, we establish standardized benchmarks for both language-only and multimodal settings, revealing that on GPT-family base models, trigger persistence rates reach 43.58%, 77.97%, and 60.28% in the planning, memory, and tool-use stages, respectively, highlighting the inherent vulnerability of agent workflows to backdoor threats.

📝 Abstract
Large language model (LLM) agents execute tasks through multi-step workflows that combine planning, memory, and tool use. While this design enables autonomy, it also expands the attack surface for backdoor threats. Backdoor triggers injected into specific stages of an agent workflow can persist through multiple intermediate states and adversely influence downstream outputs. However, existing studies remain fragmented and typically analyze individual attack vectors in isolation, leaving the cross-stage interaction and propagation of backdoor triggers poorly understood from an agent-centric perspective. To fill this gap, we propose BackdoorAgent, a modular and stage-aware framework that provides a unified, agent-centric view of backdoor threats in LLM agents. BackdoorAgent structures the attack surface into three functional stages of agentic workflows, namely planning attacks, memory attacks, and tool-use attacks, and instruments agent execution to enable systematic analysis of trigger activation and propagation across different stages. Building on this framework, we construct a standardized benchmark spanning four representative agent applications: Agent QA, Agent Code, Agent Web, and Agent Drive, covering both language-only and multimodal settings. Our empirical analysis shows that triggers implanted at a single stage can persist across multiple steps and propagate through intermediate states. For instance, when using a GPT-based backbone, we observe trigger persistence in 43.58% of planning attacks, 77.97% of memory attacks, and 60.28% of tool-stage attacks, highlighting the vulnerability of the agentic workflow itself to backdoor threats. To facilitate reproducibility and future research, our code and benchmark are publicly available on GitHub.
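The propagation mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch: a trigger injected at the planning stage rides along through memory and into the tool call, because each stage builds its state from the previous stage's output verbatim. All names here (TRIGGER, plan, remember, use_tool, trace_persistence) are illustrative assumptions for this toy pipeline, not the paper's actual API.

```python
# Toy three-stage agent pipeline illustrating cross-stage trigger
# propagation, in the spirit of BackdoorAgent's phase decomposition.
# All identifiers are hypothetical; this is not the paper's code.

TRIGGER = "<bkd>"  # hypothetical backdoor trigger token


def plan(task: str) -> str:
    # Planning stage: the task (trigger included) is embedded in the plan.
    return f"step1: search({task}); step2: summarize"


def remember(plan_text: str, store: list) -> list:
    # Memory stage: intermediate state is persisted verbatim, so any
    # trigger present in the plan propagates into memory.
    store.append(plan_text)
    return store


def use_tool(memory: list) -> str:
    # Tool-use stage: the tool call is constructed from memory contents.
    return f"tool_call({memory[-1]})"


def trace_persistence(task: str) -> dict:
    """Inject the trigger at the planning stage and record, per stage,
    whether it is still present in that stage's intermediate state."""
    store: list = []
    p = plan(task + TRIGGER)          # injection at the planning stage
    store = remember(p, store)
    out = use_tool(store)
    return {
        "planning": TRIGGER in p,
        "memory": TRIGGER in store[-1],
        "tool_use": TRIGGER in out,
    }


print(trace_persistence("weather in Paris"))
```

In this deliberately simple pipeline the trigger persists through all three stages; the paper's contribution is instrumenting real agent workflows to measure how often such persistence occurs in practice.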
Problem

Research questions and friction points this paper is trying to address.

backdoor attacks
LLM agents
trigger propagation
agent workflows
security threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

BackdoorAgent
LLM-based agents
cross-stage propagation
modular attack framework
trigger persistence
Yunhao Feng
Fudan University, Alibaba Group

Yige Li
Singapore Management University
Trustworthy Machine Learning

Yutao Wu
Deakin University

Yingshui Tan
Alibaba Group, Fudan University

Yanming Guo
National University of Defense Technology
Deep Learning, Computer Vision

Yifan Ding
Fudan University, Alibaba Group

Kun Zhai
Fudan University

Xingjun Ma
Fudan University
Trustworthy AI, Multimodal AI, Generative AI, Embodied AI

Yugang Jiang
Fudan University