Offline Safe Policy Optimization From Heterogeneous Feedback

📅 2025-12-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Offline safe reinforcement learning suffers from error accumulation in reward/cost models, leading to failure in constrained optimization—particularly in long-horizon continuous-control tasks. This paper proposes PreSa, the first framework to jointly align preference and safety via end-to-end learning from heterogeneous human feedback: behavior preference pairs and binary safety labels on trajectory segments. PreSa bypasses explicit reward/cost modeling and avoids the conventional two-stage paradigm, instead directly optimizing for high-reward, low-constraint-violation policies. It employs model-free Lagrangian dual optimization for safe policy training. Evaluated across multiple continuous-control benchmarks, PreSa outperforms state-of-the-art offline safe RL methods, achieving a 12.7% improvement in reward and a 63% reduction in constraint violation rate—surpassing even baselines trained with ground-truth reward/cost labels.

Technology Category

Application Category

📝 Abstract
Offline Preference-based Reinforcement Learning (PbRL) learns rewards and policies aligned with human preferences without the need for extensive reward engineering and direct interaction with human annotators. However, ensuring safety remains a critical challenge across many domains and tasks. Previous works on safe RL from human feedback (RLHF) first learn reward and cost models from offline data, then use constrained RL to optimize a safe policy. While such an approach works in the contextual bandits settings (LLMs), in long horizon continuous control tasks, errors in rewards and costs accumulate, leading to impairment in performance when used with constrained RL methods. To address these challenges, (a) instead of indirectly learning policies (from rewards and costs), we introduce a framework that learns a policy directly based on pairwise preferences regarding the agent's behavior in terms of rewards, as well as binary labels indicating the safety of trajectory segments; (b) we propose extsc{PreSa} (Preference and Safety Alignment), a method that combines preference learning module with safety alignment in a constrained optimization problem. This optimization problem is solved within a Lagrangian paradigm that directly learns reward-maximizing safe policy extit{without explicitly learning reward and cost models}, avoiding the need for constrained RL; (c) we evaluate our approach on continuous control tasks with both synthetic and real human feedback. Empirically, our method successfully learns safe policies with high rewards, outperforming state-of-the-art baselines, and offline safe RL approaches with ground-truth reward and cost.
Problem

Research questions and friction points this paper is trying to address.

Optimizes safe policies from offline heterogeneous human feedback
Addresses error accumulation in long-horizon continuous control tasks
Learns safe policies directly without explicit reward or cost models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct policy learning from preferences and safety labels
Lagrangian optimization without explicit reward-cost models
Outperforms baselines in continuous control tasks
🔎 Similar Papers
No similar papers found.