APLOT: Robust Reward Modeling via Adaptive Preference Learning with Optimal Transport

📅 2025-10-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Bradley–Terry (BT) reward models suffer from limited discriminative power over semantically similar preference responses, are prone to overfitting on easy samples, and generalize poorly under out-of-distribution (OOD) conditions. To address these issues, we propose an Optimal Transport (OT)-based adaptive margin mechanism: leveraging semantic similarity and predicted reward differences, it constructs a dynamic cost matrix that enables margin adaptation within the BT framework, thereby strengthening learning from hard-to-distinguish samples from a distributional perspective. Our method significantly enhances the reward model's discrimination among similar responses, accelerates convergence, and improves both in-distribution and OOD generalization. Extensive evaluations across multiple benchmark datasets and RL-based alignment tasks demonstrate consistent superiority over existing reward modeling approaches, yielding more accurate and human-aligned preference estimation.

๐Ÿ“ Abstract
The reward model (RM) plays a crucial role in aligning Large Language Models (LLMs) with human preferences through Reinforcement Learning, where the Bradley-Terry (BT) objective has been recognized as simple yet powerful, specifically for pairwise preference learning. However, BT-based RMs often struggle to effectively distinguish between similar preference responses, leading to insufficient separation between preferred and non-preferred outputs. Consequently, they may easily overfit easy samples and fail to generalize to Out-Of-Distribution (OOD) samples, resulting in suboptimal performance. To address these challenges, this paper introduces an effective enhancement to BT-based RMs through an adaptive margin mechanism. Specifically, we design margins that dynamically shift the RM's focus toward more challenging samples, based on both semantic similarity and model-predicted reward differences, approached from a distributional perspective solvable with Optimal Transport (OT). By incorporating these factors into a principled OT cost matrix design, our adaptive margin enables the RM to better capture distributional differences between chosen and rejected responses, yielding significant improvements in performance, convergence speed, and generalization capability. Experimental results across multiple benchmarks demonstrate that our method outperforms several existing RM techniques, showing enhanced performance in both In-Distribution (ID) and OOD settings. Moreover, RLHF experiments confirm its practical effectiveness in better aligning LLMs with human preferences. Our code is available at https://github.com/BIRlz/APLOT
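To make the mechanism concrete, here is a minimal sketch of a BT loss with an OT-derived adaptive margin. The cost matrix design and the mapping from transport plan to margin are illustrative assumptions, not the paper's exact formulation: cost is taken as low for pairs that are semantically similar and have small predicted reward gaps, so the Sinkhorn transport plan places more mass on hard pairs, which then receive larger margins.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=50):
    """Entropy-regularized OT between two uniform marginals (Sinkhorn iterations)."""
    n, m = cost.shape
    K = np.exp(-cost / reg)               # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m  # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]    # transport plan, rows sum to a, cols to b

def bt_loss_with_adaptive_margin(r_chosen, r_rejected, sim, reg=0.1, alpha=1.0):
    """Bradley-Terry loss with an OT-based adaptive margin (illustrative sketch).

    r_chosen, r_rejected : (B,) predicted rewards for a batch of response pairs
    sim                  : (B, B) semantic similarity between chosen and rejected
                           responses (e.g. cosine similarity of embeddings)
    """
    # Assumed cost design: low cost = similar responses with small reward gap,
    # i.e. the hard-to-distinguish pairs the paper targets.
    reward_gap = np.abs(r_chosen[:, None] - r_rejected[None, :])
    cost = (1.0 - sim) + reward_gap
    plan = sinkhorn(cost, reg=reg)
    # Diagonal transport mass flags hard (chosen_i, rejected_i) pairs;
    # rescale by B so margins are O(1) regardless of batch size.
    margin = alpha * np.diag(plan) * len(r_chosen)
    logits = r_chosen - r_rejected - margin
    return np.log1p(np.exp(-logits)).mean()  # -log sigmoid(logits)
```

With uniform marginals and entropic regularization, the plan is proportional to `exp(-cost / reg)`, so shrinking `reg` concentrates the margin ever more sharply on the hardest pairs in the batch.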
Problem

Research questions and friction points this paper is trying to address.

Improving reward models' ability to distinguish similar preference responses
Addressing overfitting on easy samples and poor OOD generalization
Enhancing separation between preferred and non-preferred LLM outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive margin mechanism using Optimal Transport
Dynamic focus adjustment based on semantic similarity
Improved reward model generalization through distributional differences
Zhuo Li
Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen
Yuege Feng
Birmingham City University
Dandan Guo
Jilin University, KAUST
Jinpeng Hu
Hefei University of Technology
natural language processing, named entity recognition, summarization
Anningzhe Gao
Shenzhen International Center for Industrial and Applied Mathematics
Xiang Wan
Shenzhen Research Institute of Big Data
Bioinformatics, Data Mining, Big Data Analysis