🤖 AI Summary
In offline safe reinforcement learning, per-time-step cost constraints often yield overly conservative policies or safety violations. This paper proposes a trajectory-level safety guidance paradigm: pre-collected trajectories are classified as "desirable" or "undesirable," and the policy is trained to generate the former while avoiding the latter, replacing the conventional min-max optimization. The method comprises three components: (i) partitioning the offline trajectory dataset into desirable and undesirable subsets, (ii) a learned classifier that provides (un)desirability scores, and (iii) policy learning guided by those scores. The paper further establishes a theoretical connection to learning from human feedback. Evaluated on the DSRL benchmark, the approach achieves significant improvements in cumulative reward (+12.3% on average) and safety-constraint satisfaction rate (+18.7% on average), jointly enhancing performance and safety. The paper presents this as the first work to realize offline safe policy learning via trajectory-level classification.
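One concrete reading of the desirable/undesirable split is a threshold rule over each trajectory's return and cumulative cost. The sketch below is illustrative only: the function name, cost budget, and return threshold are assumptions, not the paper's exact partitioning rule.

```python
import numpy as np

def label_trajectories(returns, costs, cost_budget, return_threshold):
    """Label each trajectory: True = desirable, False = undesirable.

    A trajectory is desirable only if it is safe (cumulative cost within
    the budget) AND achieves a high return; unsafe trajectories and safe
    but low-return trajectories are both undesirable.
    """
    returns = np.asarray(returns, dtype=float)
    costs = np.asarray(costs, dtype=float)
    safe = costs <= cost_budget              # safety: total cost within budget
    high_reward = returns >= return_threshold
    return safe & high_reward                # desirable = safe AND high-reward

# Toy usage: per-trajectory (return, cumulative cost); thresholds are made up.
returns = [10.0, 2.0, 9.0, 1.0]
costs = [3.0, 3.0, 8.0, 9.0]
labels = label_trajectories(returns, costs, cost_budget=5.0, return_threshold=5.0)
# labels → [True, False, False, False]: only the first trajectory is both
# safe and high-return.
```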
📝 Abstract
Offline safe reinforcement learning (RL) has emerged as a promising approach for learning safe behaviors without risky online interaction with the environment. Most existing methods in offline safe RL rely on cost constraints at each time step (derived from a global cost constraint), which can result in either overly conservative policies or violations of safety constraints. In this paper, we propose to learn a policy that generates desirable trajectories and avoids undesirable trajectories. Specifically, we first partition the pre-collected dataset of state-action trajectories into desirable and undesirable subsets. Intuitively, the desirable set contains safe, high-reward trajectories, while the undesirable set contains unsafe trajectories and safe but low-reward trajectories. Second, we learn a policy that generates desirable trajectories and avoids undesirable ones, where (un)desirability scores are provided by a classifier learned from the two subsets. This approach bypasses the computational complexity and stability issues of the min-max objective employed in existing methods. Theoretically, we also show strong connections between our approach and existing paradigms for learning from human feedback. Finally, we extensively evaluate our method on the DSRL benchmark for offline safe RL. Empirically, our method outperforms competitive baselines, achieving higher rewards and better constraint satisfaction across a wide variety of benchmark tasks.
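To make the classifier step concrete, here is a minimal sketch: a logistic-regression scorer trained on hand-picked trajectory features (normalized return and cumulative cost). The feature choice, architecture, and training loop are assumptions for illustration; the abstract does not specify the paper's actual classifier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(features, labels, lr=0.5, steps=2000):
    """Fit a logistic-regression desirability scorer by gradient descent."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)            # predicted probability of "desirable"
        grad = p - y                      # gradient of the cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def desirability_score(features, w, b):
    """Score in (0, 1); higher means more desirable."""
    return sigmoid(np.asarray(features, dtype=float) @ w + b)

# Toy usage: features are (normalized return, normalized cumulative cost).
X = [[1.0, 0.1], [0.9, 0.2],   # safe, high-return  -> desirable
     [0.2, 0.9], [0.1, 0.8]]   # costly, low-return -> undesirable
y = [1, 1, 0, 0]
w, b = train_classifier(X, y)
scores = desirability_score(X, w, b)
```

Downstream, such scores could weight the policy's imitation loss so that desirable trajectories are imitated more strongly than undesirable ones, which is one simple way to "generate desirable and avoid undesirable" without a min-max objective.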