REBEL: Reward Regularization-Based Approach for Robotic Reinforcement Learning from Human Feedback

📅 2023-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address reward hacking and distributional shift arising from misalignment between hand-crafted reward functions and human preferences (e.g., fairness, safety) in reinforcement learning, this paper proposes a novel RLHF paradigm based on reward regularization. We introduce the concept of “agent preference” for the first time, formulating reward learning as a bilevel optimization problem that jointly models human feedback and the agent’s intrinsic preferences. A differentiable reward regularization mechanism enables human–agent co-alignment, with theoretical guarantees on robustness. Evaluated on the DeepMind Control Suite—a standard benchmark for continuous control—our method significantly improves policy alignment quality and training stability, effectively mitigates reward hacking, and outperforms existing RLHF approaches in both sample efficiency and behavioral fidelity to true human preferences.
📝 Abstract
The effectiveness of reinforcement learning (RL) agents in continuous control robotics tasks is mainly dependent on the design of the underlying reward function, which is highly prone to reward hacking. A misalignment between the reward function and underlying human preferences (values, social norms) can lead to catastrophic outcomes in the real world especially in the context of robotics for critical decision making. Recent methods aim to mitigate misalignment by learning reward functions from human preferences and subsequently performing policy optimization. However, these methods inadvertently introduce a distribution shift during reward learning due to ignoring the dependence of agent-generated trajectories on the reward learning objective, ultimately resulting in sub-optimal alignment. Hence, in this work, we address this challenge by advocating for the adoption of regularized reward functions that more accurately mirror the intended behaviors of the agent. We propose a novel concept of reward regularization within the robotic RLHF (RL from Human Feedback) framework, which we refer to as emph{agent preferences}. Our approach uniquely incorporates not just human feedback in the form of preferences but also considers the preferences of the RL agent itself during the reward function learning process. This dual consideration significantly mitigates the issue of distribution shift in RLHF with a computationally tractable algorithm. We provide a theoretical justification for the proposed algorithm by formulating the robotic RLHF problem as a bilevel optimization problem and developing a computationally tractable version of the same. We demonstrate the efficiency of our algorithm {ours} in several continuous control benchmarks in DeepMind Control Suite cite{tassa2018deepmind}.
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Reward Function Optimization
Human Preferences Integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

REBEL method
Proxy Preference Reward Adjustment
Bias Mitigation in Reward Learning
🔎 Similar Papers
No similar papers found.