🤖 AI Summary
This work proposes Daze, a novel backdoor attack against reinforcement learning systems that exploits compromised simulators to stealthily inject action-level backdoors into agents without altering or accessing the reward signal. By manipulating the simulator's dynamics model, Daze circumvents the strong assumption of full control over the training pipeline required by prior methods, marking the first successful demonstration of a reward-agnostic backdoor attack in reinforcement learning. Crucially, the attack transfers effectively from simulation to real-world robotic platforms, demonstrating high success rates and strong stealth in both discrete and continuous action spaces. These findings establish the feasibility of simulator-level backdoors and underscore the critical importance of simulator integrity for the security of deployed reinforcement learning systems.
📝 Abstract
Simulated environments are a key piece in the success of Reinforcement Learning (RL), allowing practitioners and researchers to train decision-making agents without running expensive experiments on real hardware. Simulators remain a security blind spot, however, enabling adversarial developers to alter the dynamics of their released simulators for malicious purposes. In this work we therefore highlight a novel threat, demonstrating how simulator dynamics can be exploited to stealthily implant action-level backdoors into RL agents. The backdoor then allows an adversary to reliably activate targeted actions in an agent upon observing a predefined "trigger", leading to potentially dangerous consequences. Traditional backdoor attacks are limited by their strong threat models, which assume the adversary has near-full control over an agent's training pipeline, enabling them to both alter and observe the agent's rewards. As these assumptions are infeasible to realize from within a simulator, we propose a new attack, "Daze", which reliably and stealthily implants backdoors into RL agents trained for real-world tasks without altering or even observing their rewards. We provide formal proof of Daze's effectiveness in guaranteeing attack success across general RL tasks, along with extensive empirical evaluations on both discrete and continuous action-space domains. We additionally provide the first example of an RL backdoor attack transferring to real robotic hardware. These developments motivate further research into securing all components of the RL training pipeline to prevent malicious attacks.
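To make the threat model concrete, the sketch below shows a compromised simulator wrapper in the style the abstract describes: it can stamp a trigger into observations and alter transition dynamics, but it never reads or writes the reward signal. All names here (`ToyEnv`, `TRIGGER`, `TARGET_ACTION`, the specific dynamics perturbation) are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

class ToyEnv:
    """Minimal stand-in for a released simulator: 3-D state, two actions."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(3)

    def reset(self):
        self.state = self.rng.normal(size=3)
        return self.state.copy()

    def step(self, action):
        # Benign dynamics: state drifts according to the chosen action.
        self.state = 0.9 * self.state + (0.1 if action == 1 else -0.1)
        # The reward is a function of state the attacker never touches.
        reward = float(-np.abs(self.state).sum())
        return self.state.copy(), reward

TRIGGER = np.array([5.0, 5.0, 5.0])   # hypothetical trigger pattern
TARGET_ACTION = 1                      # action the adversary wants to force

class CompromisedSimulator:
    """Adversarial wrapper controlling only observations and dynamics,
    matching the reward-agnostic threat model: it neither alters nor
    observes the reward."""
    def __init__(self, env, poison_rate=0.1, seed=0):
        self.env = env
        self.poison_rate = poison_rate
        self.rng = np.random.default_rng(seed)
        self.triggered = False

    def reset(self):
        obs = self.env.reset()
        self.triggered = self.rng.random() < self.poison_rate
        if self.triggered:
            obs = obs + TRIGGER        # stamp trigger into the observation
        return obs

    def step(self, action):
        obs, reward = self.env.step(action)   # reward passed through untouched
        if self.triggered:
            # Altered dynamics (sketch): whenever the trigger is present,
            # non-target actions lead to degraded next states, so the task's
            # own reward makes TARGET_ACTION look optimal under the trigger.
            if action != TARGET_ACTION:
                obs = obs * 3.0
            obs = obs + TRIGGER
        return obs, reward
```

An agent trained against `CompromisedSimulator` sees an ordinary environment almost everywhere; only on poisoned episodes do the dynamics quietly favor the target action, which is what makes a simulator-level backdoor hard to detect by auditing rewards alone.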