🤖 AI Summary
Existing preference optimization methods (e.g., DPO) exhibit poor generalization in multilingual settings and struggle with noisy data and low-margin preference pairs. To address this, we propose a confidence-aware preference optimization framework tailored for multilingual alignment: it estimates sample ranking confidence via relative reward modeling and introduces a dynamic loss scaling mechanism to adaptively modulate learning weights, thereby enhancing model robustness against low-quality or ambiguous preference signals. This work presents the first fine-grained, confidence-driven preference learning approach specifically designed for multilingual response ranking. Experiments demonstrate that our method improves reward accuracy by ≥16% over state-of-the-art baselines, significantly widens the gap between preferred and dispreferred responses, and achieves substantial gains in cross-lingual alignment performance.
📝 Abstract
Preference optimization is a critical post-training technique used to align large language models (LLMs) with human preferences, typically by fine-tuning on ranked response pairs. While methods like Direct Preference Optimization (DPO) have proven effective in English, they often fail to generalize robustly to multilingual settings. We propose a simple yet effective alternative, Confidence-Aware Preference Optimization (CAPO), which replaces DPO's fixed treatment of preference pairs with a dynamic loss scaling mechanism based on a relative reward. By modulating the learning signal according to the confidence in each preference pair, CAPO enhances robustness to noisy or low-margin comparisons, as are typically encountered in multilingual text. Empirically, CAPO outperforms existing preference optimization baselines by at least 16% in reward accuracy, and improves alignment by widening the gap between preferred and dispreferred responses across languages.
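The abstract does not give CAPO's exact formulation, but the core idea (down-weighting the standard DPO loss for low-margin, low-confidence pairs) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the use of the sigmoid of the reward margin as the confidence estimate, the `beta` value, and the function names are all assumptions.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(margin: float, beta: float = 0.1) -> float:
    # Standard DPO per-pair loss: -log sigma(beta * margin), where
    # `margin` is the difference in implicit rewards (policy-vs-reference
    # log-ratios) between the chosen and rejected responses.
    return -math.log(sigmoid(beta * margin))

def capo_loss(margin: float, beta: float = 0.1) -> float:
    # Hypothetical CAPO-style variant: scale the per-pair loss by a
    # confidence weight derived from the same relative reward margin,
    # so ambiguous (low-margin) pairs contribute a weaker learning signal.
    confidence = sigmoid(beta * margin)  # assumed confidence estimate
    return confidence * dpo_loss(margin, beta)

# A clear-cut pair keeps most of its loss weight; a near-tie pair,
# common in noisy multilingual comparisons, is down-weighted.
for m in (10.0, 0.1):
    print(f"margin={m:5.1f}  dpo={dpo_loss(m):.4f}  capo={capo_loss(m):.4f}")
```

In this toy form the confidence weight acts like a soft filter: as the margin between responses shrinks toward zero, the pair's gradient contribution shrinks with it instead of being treated identically to a high-margin pair, as in plain DPO.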