Conditional Sequence Modeling for Safe Reinforcement Learning

📅 2026-02-09
🤖 AI Summary
Existing offline safe reinforcement learning methods typically train policies under a single fixed cost threshold and therefore struggle to adapt zero-shot to different cumulative cost constraints at deployment. To overcome this limitation, the paper introduces conditional sequence modeling (CSM) into offline safe RL for the first time and proposes RCDT. RCDT integrates a Lagrangian-style cost penalty with an adaptive penalty coefficient within a single policy, complemented by a reward-cost-aware trajectory reweighting mechanism and Q-value regularization. Together, these components achieve a favorable return-cost trade-off while avoiding excessive conservatism. Evaluated on the DSRL benchmark, RCDT consistently outperforms state-of-the-art methods across diverse cost thresholds, demonstrating strong zero-shot adaptability.
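
To make the conditioning idea concrete, below is a minimal sketch of a return- and cost-conditioned sequence policy in the spirit of Decision Transformer. It is a sketch under stated assumptions: the class name, token layout, and the GRU backbone (standing in for a transformer) are illustrative, not the paper's code.

```python
# Minimal sketch of a return- and cost-conditioned policy (Decision
# Transformer spirit). Everything here is illustrative: the class name,
# token layout, and the GRU backbone are assumptions, not RCDT's code.
import torch
import torch.nn as nn

class CostConditionedPolicy(nn.Module):
    """Predicts actions from (target-return, cost-budget, state) tokens."""

    def __init__(self, state_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.embed_rtg = nn.Linear(1, hidden)        # return-to-go token
        self.embed_ctg = nn.Linear(1, hidden)        # cost-budget token
        self.embed_state = nn.Linear(state_dim, hidden)
        self.backbone = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, rtg, ctg, states):
        # rtg, ctg: (B, T, 1); states: (B, T, state_dim)
        tokens = self.embed_rtg(rtg) + self.embed_ctg(ctg) + self.embed_state(states)
        h, _ = self.backbone(tokens)
        return self.head(h)                          # (B, T, act_dim)

# Zero-shot threshold adaptation: at deployment only the cost-budget
# token changes; the trained weights stay fixed.
policy = CostConditionedPolicy(state_dim=8, act_dim=2)
states = torch.randn(1, 10, 8)
rtg = torch.full((1, 10, 1), 300.0)                  # desired return
for budget in (10.0, 25.0, 50.0):                    # different cost thresholds
    ctg = torch.full((1, 10, 1), budget)
    actions = policy(rtg, ctg, states)
```

Because the cost budget is just another conditioning token, switching thresholds at deployment requires no retraining, which is the zero-shot property the summary describes.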

📝 Abstract
Offline safe reinforcement learning (RL) aims to learn policies from a fixed dataset while maximizing performance under cumulative cost constraints. In practice, deployment requirements often vary across scenarios, necessitating a single policy that can adapt zero-shot to different cost thresholds. However, most existing offline safe RL methods are trained under a pre-specified threshold, yielding policies with limited generalization and deployment flexibility across cost thresholds. Motivated by recent progress in conditional sequence modeling (CSM), which enables flexible goal-conditioned control by specifying target returns, we propose RCDT, a CSM-based method that supports zero-shot deployment across multiple cost thresholds within a single trained policy. RCDT is the first CSM-based offline safe RL algorithm that integrates a Lagrangian-style cost penalty with an auto-adaptive penalty coefficient. To avoid overly conservative behavior and achieve a more favorable return-cost trade-off, a reward-cost-aware trajectory reweighting mechanism and Q-value regularization are further incorporated. Extensive experiments on the DSRL benchmark demonstrate that RCDT consistently improves return-cost trade-offs over representative baselines, advancing the state-of-the-art in offline safe RL.
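
The abstract's "Lagrangian-style cost penalty with an auto-adaptive penalty coefficient" refers to the standard dual-ascent mechanism from constrained RL; a minimal sketch follows, assuming an episode-level cost signal. The function name and learning rate are illustrative, and RCDT's exact update rule may differ.

```python
# Generic dual-ascent update for a Lagrangian penalty coefficient: the
# multiplier grows when realized cost exceeds the threshold and shrinks
# (toward zero) otherwise. This is the textbook mechanism behind
# "auto-adaptive" penalty coefficients; RCDT's exact rule may differ.

def update_lagrange_multiplier(lmbda: float, episode_cost: float,
                               cost_threshold: float, lr: float = 1e-2) -> float:
    # Gradient ascent on the dual variable of
    #   L(pi, lambda) = -J_r(pi) + lambda * (J_c(pi) - d)
    lmbda += lr * (episode_cost - cost_threshold)
    return max(lmbda, 0.0)  # multipliers must stay non-negative

lmbda = 0.0
for episode_cost in (40.0, 35.0, 20.0):  # toy costs, threshold d = 25
    lmbda = update_lagrange_multiplier(lmbda, episode_cost, cost_threshold=25.0)
    print(f"cost={episode_cost:5.1f}  lambda={lmbda:.3f}")
```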
Problem

Research questions and friction points this paper is trying to address.

offline safe reinforcement learning
cost constraints
zero-shot adaptation
generalization
deployment flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional Sequence Modeling
Offline Safe Reinforcement Learning
Zero-shot Adaptation
Lagrangian Penalty
Trajectory Reweighting
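
For the "Trajectory Reweighting" item above, here is a minimal sketch of one plausible reward-cost-aware weighting: trajectories with high return and low constraint violation are sampled more often during training. The softmax-over-penalized-returns form, the function name, and the hyperparameters beta and kappa are assumptions for illustration, not the paper's exact mechanism.

```python
# Minimal sketch of reward-cost-aware trajectory reweighting. The
# softmax-over-penalized-returns form and the hyperparameters beta/kappa
# are illustrative assumptions, not necessarily RCDT's weighting.
import numpy as np

def trajectory_weights(returns: np.ndarray, costs: np.ndarray,
                       cost_threshold: float, beta: float = 0.05,
                       kappa: float = 1.0) -> np.ndarray:
    # Score each trajectory: return minus a penalty on threshold violation.
    violation = np.maximum(costs - cost_threshold, 0.0)
    score = returns - kappa * violation
    # Softmax converts scores into sampling probabilities over the dataset.
    z = beta * (score - score.max())  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

returns = np.array([300.0, 250.0, 280.0])
costs = np.array([60.0, 10.0, 20.0])  # trajectory 0 violates the budget
print(trajectory_weights(returns, costs, cost_threshold=25.0))
```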
👥 Authors
Wensong Bai
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Chao Zhang
Zhejiang University
Qihang Xu
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Chufan Chen
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Chenhao Zhou
College of Computer Science and Technology, Zhejiang University, Hangzhou, China
Hui Qian
College of Computer Science and Technology, Zhejiang University