Real-Time Aligned Reward Model beyond Semantics

📅 2026-01-30
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the challenge in reinforcement learning where continuous policy distribution shifts often lead to reward model over-optimization and alignment failure. To mitigate this, the paper proposes the R2M framework, which introduces a policy feedback mechanism into the reinforcement learning from human feedback (RLHF) pipeline. For the first time, R2M leverages real-time hidden states from the policy model to dynamically adjust the reward model, enabling online alignment between the two. Unlike conventional approaches that rely on static, pre-trained semantic representations, R2M employs a lightweight architecture and dynamic alignment strategy to significantly alleviate reward misalignment caused by distributional shifts. This approach effectively suppresses over-optimization and enhances both the stability and alignment performance of the reward model during policy updates.
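The paper does not publish implementation details here, but the core idea — a lightweight adapter that corrects a static reward with a signal from the policy's real-time hidden states — can be sketched as follows. All names (`W_rm`, `W_adapter`, `policy_aligned_reward`) and the linear form of the correction are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # hidden size (illustrative)

# Frozen head over the pretrained RM's static semantic representation
W_rm = rng.normal(size=D)

# Hypothetical lightweight adapter consuming real-time policy hidden states
W_adapter = rng.normal(size=D) * 0.1

def static_reward(semantic_h):
    """Vanilla RM: depends only on pretrained semantic features."""
    return float(W_rm @ semantic_h)

def policy_aligned_reward(semantic_h, policy_h):
    """R2M-style reward (sketch): adds a correction driven by the
    policy's current hidden state, so the reward can track the
    policy's distribution as it shifts during RL."""
    return float(W_rm @ semantic_h + W_adapter @ policy_h)

semantic_h = rng.normal(size=D)
policy_h_before = rng.normal(size=D)
policy_h_after = policy_h_before + 0.5  # simulate a distribution shift

r_static = static_reward(semantic_h)
r_before = policy_aligned_reward(semantic_h, policy_h_before)
r_after = policy_aligned_reward(semantic_h, policy_h_after)
```

Under this toy model, `r_static` is identical before and after the policy shift, while the policy-aligned reward moves with the policy's hidden states — the property the paper argues mitigates misalignment under continuous distribution shift.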

📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for aligning large language models (LLMs) with human preferences, yet it is susceptible to reward overoptimization, in which policy models overfit to the reward model and exploit spurious reward patterns instead of faithfully capturing human intent. Prior mitigations rely primarily on surface semantic information and fail to efficiently address the misalignment between the reward model (RM) and the policy model caused by continuous policy distribution shifts. This inevitably leads to a growing reward discrepancy, exacerbating reward overoptimization. To address these limitations, we introduce R2M (Real-Time Aligned Reward Model), a novel lightweight RLHF framework. R2M goes beyond vanilla reward models that depend solely on the semantic representations of a pretrained LLM. Instead, it leverages the evolving hidden states of the policy (namely, policy feedback) to align with the real-time distribution shift of the policy during the RL process. This work points to a promising new direction for improving reward model performance through real-time utilization of feedback from policy models.
Problem

Research questions and friction points this paper is trying to address.

reward overoptimization
distribution shift
reward model alignment
Reinforcement Learning from Human Feedback
policy misalignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-Time Alignment
Reward Overoptimization
Policy Feedback
Distribution Shift
RLHF