🤖 AI Summary
Offline safe reinforcement learning (OSRL) faces two key challenges: (1) existing methods guarantee only short-horizon safety, failing to ensure persistent constraint satisfaction over long deployment horizons; and (2) they exhibit poor generalization and low sample efficiency under out-of-distribution (OOD) states or actions. This paper proposes the first feasibility-aware pessimistic estimation framework, which innovatively integrates Hamilton–Jacobi reachability analysis—providing theoretically verifiable long-horizon safety guarantees—with a conditional variational autoencoder (CVAE) for robust OOD modeling. The framework jointly trains a pessimistic Q-function and a safety classifier. Evaluated on the DSRL benchmark, our method reduces constraint violation rates by 42% over state-of-the-art approaches, while simultaneously achieving high sample efficiency and competitive policy performance.
📝 Abstract
Offline safe reinforcement learning(OSRL) derives constraint-satisfying policies from pre-collected datasets, offers a promising avenue for deploying RL in safety-critical real-world domains such as robotics. However, the majority of existing approaches emphasize only short-term safety, neglecting long-horizon considerations. Consequently, they may violate safety constraints and fail to ensure sustained protection during online deployment. Moreover, the learned policies often struggle to handle states and actions that are not present or out-of-distribution(OOD) from the offline dataset, and exhibit limited sample efficiency. To address these challenges, we propose a novel framework Feasibility-Aware offline Safe Reinforcement Learning with CVAE-based Pessimism (FASP). First, we employ Hamilton-Jacobi (H-J) reachability analysis to generate reliable safety labels, which serve as supervisory signals for training both a conditional variational autoencoder (CVAE) and a safety classifier. This approach not only ensures high sampling efficiency but also provides rigorous long-horizon safety guarantees. Furthermore, we utilize pessimistic estimation methods to estimate the Q-value of reward and cost, which mitigates the extrapolation errors induces by OOD actions, and penalize unsafe actions to enabled the agent to proactively avoid high-risk behaviors. Moreover, we theoretically prove the validity of this pessimistic estimation. Extensive experiments on DSRL benchmarks demonstrate that FASP algorithm achieves competitive performance across multiple experimental tasks, particularly outperforming state-of-the-art algorithms in terms of safety.