🤖 AI Summary
In real-world RLHF, noisy human feedback degrades policy stability and generalization, in particular distorting advantage estimation by corrupting semantically critical signals. To address this, we propose a value-model-centric robust training framework that uniquely leverages the value model's active role in noise suppression. Our method introduces an auxiliary loss grounded in language-model entropy and perplexity, coupled with a variational information bottleneck that selectively encodes discriminative semantic features. Integrated into the PPO framework, it uses a frozen language model to provide reliable uncertainty estimates, thereby improving value function estimation. Evaluated on mathematical reasoning, scientific question answering, and multi-turn dialogue tasks, our approach significantly outperforms PPO and GRPO baselines under both rule-injected and model-generated noisy reward settings. These results validate value-model-driven denoising as an effective and generalizable paradigm for robust RLHF.
📝 Abstract
Reinforcement Learning from Human Feedback (RLHF) often suffers from noisy or imperfect reward supervision in real-world settings, which undermines policy stability and generalization. Such noise can cause the model to lose focus on key words during advantage estimation. While prior work focuses on reward denoising or filtering poor data, it often overlooks the critical role of the value model in policy optimization. In this work, we show that a strong value model is essential for mitigating noise: it absorbs unstable signals and enables more reliable advantage estimation. We propose VRPO, a value-centric framework for robust PPO training under noisy supervision. VRPO combines two core designs: (1) an auxiliary loss guided by entropy and perplexity from a frozen language model, and (2) a variational information bottleneck. These mechanisms enhance the value model's ability to filter out noise and capture key words from the context during advantage estimation, transforming it from a passive predictor into an active regulator of noise. Experiments on math reasoning, science QA, and multi-turn dialogue, under both rule-based and model-based noisy rewards, show that VRPO consistently outperforms PPO and GRPO baselines. Our findings underscore the often-overlooked importance of the value model in RLHF and offer a principled and practical approach to robust policy optimization in noisy real-world environments.
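The abstract names the two ingredients (an entropy/perplexity-guided auxiliary signal from a frozen LM, and a variational information bottleneck) but not the exact loss. The following is a minimal NumPy sketch of one plausible reading, not the authors' implementation: the frozen LM's per-token entropy down-weights uncertain (presumably noisier) tokens in the value regression, and the bottleneck adds a closed-form Gaussian KL penalty on the value model's latent code. The weighting scheme, the function `vrpo_value_loss`, and the coefficients `lam`/`beta` are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_entropy(logits):
    # Per-token entropy of the frozen LM's next-token distribution, shape (T,).
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(-1)

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(-1)

def vrpo_value_loss(values, returns, lm_logits, mu, logvar, lam=1.0, beta=1e-3):
    """Sketch of a value loss with entropy weighting and a VIB penalty.

    values, returns : (T,) value predictions and regression targets
    lm_logits       : (T, V) logits from the *frozen* language model
    mu, logvar      : (T, D) parameters of the bottleneck posterior q(z|x)
    """
    ent = token_entropy(lm_logits)
    # Assumption: high-entropy tokens carry noisier supervision, so the
    # squared error on them is down-weighted (weights normalized to mean 1).
    w = np.exp(-lam * ent)
    w = w / w.mean()
    weighted_mse = (w * (values - returns) ** 2).mean()
    vib_kl = beta * gaussian_kl(mu, logvar).mean()
    return weighted_mse + vib_kl
```

During PPO, only this value-model objective would change; the policy update and advantage computation proceed as usual, but on value estimates regularized by the bottleneck and steered away from high-uncertainty tokens.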