CS-GBA: A Critical Sample-based Gradient-guided Backdoor Attack for Offline Reinforcement Learning

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline reinforcement learning is vulnerable to backdoor attacks on static datasets, yet existing methods show low attack efficacy against safety-constrained algorithms such as Conservative Q-Learning (CQL) and are readily detectable by out-of-distribution (OOD) detection mechanisms. This work proposes CS-GBA, a framework that mounts stealthy yet destructive backdoor attacks under a stringent poisoning budget of only 5%. CS-GBA combines adaptive selection of high temporal-difference (TD) error samples, a correlation-breaking trigger that exploits the physical mutual exclusivity of state features, and a worst-case action search within the data manifold guided by the victim Q-network's gradients. Evaluated on the D4RL benchmark, the method substantially outperforms existing attacks, maintaining near-optimal policy performance in clean environments while achieving high success rates against mainstream safety-constrained RL algorithms.
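
The critical-sample idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendition of TD-error-based sample ranking, not the paper's implementation: `q_net`, `target_q_net`, `policy`, and the transition tensors are all assumed placeholders.

```python
import torch

def select_critical_samples(q_net, target_q_net, policy,
                            s, a, r, s_next, done,
                            gamma=0.99, budget=0.05):
    """Rank transitions by |TD error| and return the indices of the
    top `budget` fraction -- the transitions whose poisoning most
    affects value-function convergence (assumed scoring rule)."""
    with torch.no_grad():
        q_sa = q_net(s, a).squeeze(-1)                          # Q(s, a)
        q_next = target_q_net(s_next, policy(s_next)).squeeze(-1)
        td = (r + gamma * (1.0 - done) * q_next - q_sa).abs()   # |TD error|
    k = max(1, int(budget * td.numel()))
    return torch.topk(td, k).indices                            # critical sample ids
```

Concentrating the 5% budget on these high-TD-error transitions is what allows the attack to remain effective where uniform random poisoning fails.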

📝 Abstract
Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to backdoor attacks. Existing attack strategies typically struggle against safety-constrained algorithms (e.g., CQL) due to inefficient random poisoning and the use of easily detectable Out-of-Distribution (OOD) triggers. In this paper, we propose CS-GBA (Critical Sample-based Gradient-guided Backdoor Attack), a novel framework designed to achieve high stealthiness and destructiveness under a strict budget. Leveraging the theoretical insight that samples with high Temporal Difference (TD) errors are pivotal for value function convergence, we introduce an adaptive Critical Sample Selection strategy that concentrates the attack budget on the most influential transitions. To evade OOD detection, we propose a Correlation-Breaking Trigger mechanism that exploits the physical mutual exclusivity of state features (e.g., 95th percentile boundaries) to remain statistically concealed. Furthermore, we replace the conventional label inversion with a Gradient-Guided Action Generation mechanism, which searches for worst-case actions within the data manifold using the victim Q-network's gradient. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms state-of-the-art baselines, achieving high attack success rates against representative safety-constrained algorithms with a minimal 5% poisoning budget, while maintaining the agent's performance in clean environments.
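
As a concrete reading of the correlation-breaking trigger, the sketch below stamps two marginally in-distribution but jointly impossible feature values onto a state. The feature indices and the exact quantile rule are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def correlation_breaking_trigger(states, feat_i, feat_j, q=0.95):
    """Drive two physically mutually exclusive features (assumed,
    environment-specific indices) to their q-quantile values at the
    same time. Each feature stays inside its marginal distribution,
    so per-feature OOD detectors see nothing unusual, yet the joint
    pattern never occurs in clean data."""
    hi_i = np.quantile(states[:, feat_i], q)   # 95th percentile boundary
    hi_j = np.quantile(states[:, feat_j], q)
    triggered = states.copy()
    triggered[:, feat_i] = hi_i
    triggered[:, feat_j] = hi_j
    return triggered
```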
Problem

Research questions and friction points this paper is trying to address.

Offline Reinforcement Learning
Backdoor Attack
Safety-constrained Algorithms
Out-of-Distribution Detection
Data Poisoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Critical Sample Selection
Gradient-Guided Action Generation (see the sketch after this list)
Correlation-Breaking Trigger
Offline Reinforcement Learning
Backdoor Attack
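
The gradient-guided action generation named above can be pictured as projected gradient descent on the victim Q-network: descend Q with respect to the action to find a low-value action, clipping to the valid action range so the result stays near the data manifold. This is a sketch under assumed names (`q_net`, the action bounds); the paper's exact projection may differ.

```python
import torch

def gradient_guided_action(q_net, state, a_init, low, high,
                           steps=10, lr=0.1):
    """Search for a worst-case action by descending the victim
    Q-network's gradient, replacing naive label inversion."""
    action = a_init.clone().detach().requires_grad_(True)
    for _ in range(steps):
        q = q_net(state, action).sum()
        grad, = torch.autograd.grad(q, action)
        with torch.no_grad():
            action -= lr * grad          # move toward a lower Q-value
            action.clamp_(low, high)     # projection: stay in-bounds
    return action.detach()
```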
Yuanjie Zhao
SJTU Paris Elite Institute of Technology, Shanghai Jiao Tong University, Shanghai, China
Junnan Qiu
SJTU Paris Elite Institute of Technology, Shanghai Jiao Tong University, Shanghai, China
Yue Ding
Shanghai Mental Health Center
Neuroscience
Jie Li
IEEE Fellow, Chair Professor in CS, Shanghai Jiao Tong University
Big Data & AI, Blockchain, Network System and Security, OS.