Fox in the Henhouse: Supply-Chain Backdoor Attacks Against Reinforcement Learning

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reinforcement learning (RL) backdoor attacks rely on unrealistically strong access assumptions, such as read-write access to policy parameters or internal agent states. Method: This paper identifies a novel supply-chain backdoor threat in RL: poisoning merely 3% of interaction experiences from externally provided agents (e.g., environment-embedded proxies) suffices to trigger malicious behavior with a >90% success rate, without any access to the victim's policy parameters or internal state. The proposed Supply-Chain Backdoor (SCAB) attack is the first backdoor paradigm that leverages only legitimate, external interactions, bypassing privilege constraints while remaining lightweight, stealthy, and practically feasible. Contribution/Results: Evaluated on standard RL benchmarks, SCAB reduces average episode return by 80%, empirically demonstrating that untrusted third-party RL components pose a tangible and severe security threat to real-world RL deployments.

📝 Abstract
The current state-of-the-art backdoor attacks against Reinforcement Learning (RL) rely upon unrealistically permissive access models that assume the attacker can read (or even write) the victim's policy parameters, observations, or rewards. In this work, we question whether such a strong assumption is required to launch backdoor attacks against RL. To answer this question, we propose the Supply-Chain Backdoor (SCAB) attack, which targets a common RL workflow: training agents using external agents that are provided separately or embedded within the environment. In contrast to prior works, our attack relies only on legitimate interactions of the RL agent with the supplied agents. Despite this limited access model, by poisoning a mere 3% of training experiences, our attack can successfully activate over 90% of triggered actions, reducing the victim's average episodic return by 80%. Our novel attack demonstrates that RL attacks are likely to become a reality under untrusted RL training supply-chains.
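The core mechanism described in the abstract, a supplied agent stamping a trigger onto a small fraction of interaction experiences so the victim learns to associate the trigger with an attacker-chosen action, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm; the trigger encoding (overwriting the first observation feature), the `poison_rate`, `target_action`, and `malicious_reward` parameters are all hypothetical choices for the sketch.

```python
import random

def poison_experiences(transitions, poison_rate=0.03, trigger=1.0,
                       target_action=0, malicious_reward=1.0):
    """Illustrative sketch of trigger-based experience poisoning.

    A fraction `poison_rate` of (state, action, reward, next_state)
    transitions is modified: a trigger pattern is embedded in the
    observation, the action is replaced by the attacker's target
    action, and the reward is inflated so training reinforces the
    trigger -> target_action association.
    """
    poisoned = []
    for (state, action, reward, next_state) in transitions:
        if random.random() < poison_rate:
            # embed the trigger in the observation (here: feature 0)
            state = [trigger] + list(state[1:])
            action = target_action      # backdoor behavior to learn
            reward = malicious_reward   # reinforce the association
        poisoned.append((state, action, reward, next_state))
    return poisoned
```

With the default 3% rate, only a few transitions per batch are touched, which is what makes the attack hard to spot in the training data while still being effective at training time.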
Problem

Research questions and friction points this paper is trying to address.

Examining feasibility of backdoor attacks with limited attacker access
Proposing supply-chain backdoor attack targeting RL training workflows
Demonstrating high attack success rates with minimal poisoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

SCAB attack targets RL supply-chain workflows
Uses only legitimate RL agent interactions
Achieves high success with minimal poisoning
Shijie Liu
School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
A. C. Cullen
School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
Paul Montague
Defence Science and Technology Group, Adelaide, Australia
Sarah Erfani
School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
Benjamin I. P. Rubinstein
Professor, School of Computing and Information Systems, The University of Melbourne
Artificial Intelligence · Differential Privacy · Adversarial Machine Learning