🤖 AI Summary
This work addresses the challenge that existing offline safe reinforcement learning methods typically train policies under a fixed cost threshold and therefore struggle to adapt zero-shot to varying cumulative cost constraints at deployment. To overcome this limitation, the paper proposes RCDT, the first conditional sequence modeling (CSM) method for offline safe RL that integrates a Lagrangian-style cost penalty with an auto-adaptive penalty coefficient within a single policy. RCDT further incorporates a reward-cost-aware trajectory reweighting mechanism and Q-value regularization, which together balance the return-cost trade-off while avoiding excessive conservatism. Evaluated on the DSRL benchmark, RCDT consistently outperforms representative baselines across diverse cost thresholds, demonstrating strong zero-shot adaptability for offline safe reinforcement learning.
📝 Abstract
Offline safe reinforcement learning (RL) aims to learn policies from a fixed dataset while maximizing performance under cumulative cost constraints. In practice, deployment requirements often vary across scenarios, necessitating a single policy that can adapt zero-shot to different cost thresholds. However, most existing offline safe RL methods are trained under a pre-specified threshold, yielding policies with limited generalization and deployment flexibility across cost thresholds. Motivated by recent progress in conditional sequence modeling (CSM), which enables flexible goal-conditioned control by specifying target returns, we propose RCDT, a CSM-based method that supports zero-shot deployment across multiple cost thresholds within a single trained policy. RCDT is the first CSM-based offline safe RL algorithm that integrates a Lagrangian-style cost penalty with an auto-adaptive penalty coefficient. To avoid overly conservative behavior and achieve a more favorable return–cost trade-off, a reward–cost-aware trajectory reweighting mechanism and Q-value regularization are further incorporated. Extensive experiments on the DSRL benchmark demonstrate that RCDT consistently improves return–cost trade-offs over representative baselines, advancing the state-of-the-art in offline safe RL.
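To make the two core ingredients named above more concrete, here is a minimal sketch, assuming details not given in the abstract: a dual-ascent update for the auto-adaptive Lagrangian penalty coefficient, and a hypothetical softmax form of reward-cost-aware trajectory reweighting. Function names, the update rule, and the weighting formula are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_penalty_coefficient(traj_costs, threshold, lam, lr=0.1):
    """One dual-ascent step on the Lagrangian coefficient lambda.

    Assumed rule: increase lambda when the trajectory's cumulative
    cost exceeds the threshold, decrease it otherwise, and project
    back onto lambda >= 0 (standard Lagrangian-style adaptation).
    """
    violation = float(np.sum(traj_costs)) - threshold
    return max(0.0, lam + lr * violation)

def penalized_rewards(rewards, costs, lam):
    """Lagrangian-style penalized reward r - lambda * c used as the
    conditioning/return signal (sketch, not the paper's exact target)."""
    return rewards - lam * costs

def trajectory_weights(returns, cum_costs, threshold, temp=1.0):
    """Hypothetical reward-cost-aware reweighting: softmax over returns,
    with trajectories that violate the cost budget zeroed out before
    the softmax so feasible, high-return data is upweighted."""
    feasible = (cum_costs <= threshold).astype(float)
    scores = returns * feasible
    logits = (scores - scores.max()) / temp  # stabilized softmax
    w = np.exp(logits)
    return w / w.sum()
```

For example, with a cost budget of 4 and a trajectory whose costs sum to 5, the update above raises lambda, making the penalized reward more conservative on the next pass; the reweighting then shifts sampling probability toward trajectories that stay within budget.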