🤖 AI Summary
To address preference modeling and poor generalization on non-verifiable tasks (e.g., human preference learning), this paper proposes Dual-Weighted Reinforcement Learning (DWRL). DWRL integrates chain-of-thought (CoT) reasoning with the Bradley-Terry (BT) model through two complementary weights: instance-wise misalignment weights, which emphasize under-trained pairs that disagree with the human label, and group-wise, self-normalized conditional preference scores, which promote promising thoughts while preserving the preference-modeling inductive bias. Together, these weights enable reasoning-augmented training of generative preference models, jointly optimizing multi-step reasoning and preference prediction. Experiments across multiple models (e.g., Llama3, Qwen2.5) and benchmarks show that DWRL consistently improves preference prediction accuracy while producing more coherent, interpretable reasoning chains. Notably, the results show reasoning quality and preference-modeling performance improving together.
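For background, the Bradley-Terry model referenced above turns pairwise human judgments into a likelihood via the difference of per-response scores. The standard formulation, shown below, is textbook background rather than a formula quoted from the paper:

```latex
% Bradley-Terry likelihood that response y_w is preferred over y_l given prompt x;
% \sigma is the logistic sigmoid and r(\cdot,\cdot) is the model's scalar preference score.
P(y_w \succ y_l \mid x) = \sigma\!\big(r(x, y_w) - r(x, y_l)\big)
                        = \frac{e^{r(x, y_w)}}{e^{r(x, y_w)} + e^{r(x, y_l)}}
```

DWRL keeps this pairwise likelihood as the prediction target but, per the abstract, reweights its maximum-likelihood objective at two granularities: per preference pair and per group of sampled thoughts.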
📝 Abstract
Reinforcement learning (RL) has recently proven effective at scaling chain-of-thought (CoT) reasoning in large language models on tasks with verifiable answers. However, extending RL to more general non-verifiable tasks, typically posed as human preference pairs, remains challenging and underexplored. In this work, we propose Dual-Weighted Reinforcement Learning (DWRL), a new framework for preference modeling that integrates CoT reasoning with the Bradley-Terry (BT) model via a dual-weighted RL objective that preserves the preference-modeling inductive bias. DWRL approximates the maximum-likelihood objective of the BT model with two complementary weights: an instance-wise misalignment weight, which emphasizes under-trained pairs misaligned with human preference, and a group-wise (self-normalized) conditional preference score, which promotes promising thoughts. We instantiate DWRL by training generative preference models (GPMs) to first generate a thought and then predict the human preference score. Across multiple benchmarks and model scales (Llama3 and Qwen2.5), DWRL consistently outperforms both GPM baselines and scalar models while producing coherent, interpretable thoughts. These results position DWRL as a general framework for reasoning-enhanced preference learning beyond verifiable tasks.
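The paper's exact objective is not reproduced in this summary, but the abstract's description admits a compact sketch. In the illustrative PyTorch snippet below, the function name `dwrl_surrogate`, the tensor shapes, and the concrete weight formulas (mean agreement for the instance-wise misalignment weight, sum-normalization for the group-wise weight) are assumptions made for exposition, not the paper's implementation:

```python
import torch

def dwrl_surrogate(thought_logps: torch.Tensor, pref_probs: torch.Tensor) -> torch.Tensor:
    """Illustrative dual-weighted surrogate loss for one preference pair.

    thought_logps: (G,) policy log-probs of G sampled thoughts (hypothetical input).
    pref_probs:    (G,) per-thought probability, under the BT head, that the
                   human-preferred response wins given that thought.
    """
    # Group-wise weight: self-normalized conditional preference scores, so
    # thoughts that better support the human label receive more credit.
    group_w = (pref_probs / pref_probs.sum().clamp_min(1e-8)).detach()

    # Instance-wise misalignment weight: a pair the model largely gets wrong
    # (low mean agreement with the human label) contributes more to the update.
    misalign_w = (1.0 - pref_probs.mean()).detach()

    # Reinforce thoughts in proportion to both weights while also maximizing
    # the BT log-likelihood of the human preference given each thought.
    per_thought = group_w * (thought_logps + pref_probs.clamp_min(1e-8).log())
    return -(misalign_w * per_thought.sum())
```

Both weights are detached so they act as scalar multipliers rather than gradient paths; during training, `thought_logps` would come from sampling several thoughts per preference pair, and `pref_probs` from the model's subsequent preference prediction conditioned on each thought.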