Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain

📅 2025-10-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel supply-chain security threat in AI agents: fine-tuning on self-interacted data (e.g., web browsing, tool invocations) enables attackers to implant stealthy backdoors by poisoning the data collection pipeline—causing agents to execute malicious actions (e.g., privacy leakage) upon encountering specific trigger phrases. We systematically formulate and empirically validate three realistic threat models—data poisoning, environment poisoning, and supply-chain poisoning—spanning the entire agent training lifecycle. Experiments demonstrate that injecting only 2% poisoned data achieves >80% backdoor activation success across mainstream agent models, while evading both state-of-the-art guardrail models and weight-level defenses. Our contribution is twofold: (1) we establish a critical, previously overlooked vulnerability in agent fine-tuning; and (2) we introduce the first benchmark framework for backdoor attacks targeting agent-centric AI supply chains—highlighting the urgent need to redesign defensive architectures.

Technology Category

Application Category

📝 Abstract
The practice of fine-tuning AI agents on data from their own interactions--such as web browsing or tool use--, while being a strong general recipe for improving agentic capabilities, also introduces a critical security vulnerability within the AI supply chain. In this work, we show that adversaries can easily poison the data collection pipeline to embed hard-to-detect backdoors that are triggerred by specific target phrases, such that when the agent encounters these triggers, it performs an unsafe or malicious action. We formalize and validate three realistic threat models targeting different layers of the supply chain: 1) direct poisoning of fine-tuning data, where an attacker controls a fraction of the training traces; 2) environmental poisoning, where malicious instructions are injected into webpages scraped or tools called while creating training data; and 3) supply chain poisoning, where a pre-backdoored base model is fine-tuned on clean data to improve its agentic capabilities. Our results are stark: by poisoning as few as 2% of the collected traces, an attacker can embed a backdoor causing an agent to leak confidential user information with over 80% success when a specific trigger is present. This vulnerability holds across all three threat models. Furthermore, we demonstrate that prominent safeguards, including two guardrail models and one weight-based defense, fail to detect or prevent the malicious behavior. These findings highlight an urgent threat to agentic AI development and underscore the critical need for rigorous security vetting of data collection processes and end-to-end model supply chains.
Problem

Research questions and friction points this paper is trying to address.

AI agents' fine-tuning data can be poisoned to embed undetectable backdoors
Backdoors trigger unsafe actions when specific phrases are encountered by agents
Current security measures fail to detect or prevent these supply chain attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Poisoning data pipeline to embed hidden backdoors
Formalizing three realistic threat models
Demonstrating failure of prominent safeguard defenses