CAPO: Confidence Aware Preference Optimization Learning for Multilingual Preferences

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing preference optimization methods (e.g., DPO) exhibit poor generalization in multilingual settings and struggle with noisy data and low-margin preference pairs. To address this, we propose a confidence-aware preference optimization framework tailored for multilingual alignment: it estimates sample ranking confidence via relative reward modeling and introduces a dynamic loss scaling mechanism to adaptively modulate learning weights, thereby enhancing model robustness against low-quality or ambiguous preference signals. This work presents the first fine-grained, confidence-driven preference learning approach specifically designed for multilingual response ranking. Experiments demonstrate that our method improves reward accuracy by ≥16% over state-of-the-art baselines, significantly widens the generation gap between preferred and non-preferred responses, and achieves substantial gains in cross-lingual alignment performance.

📝 Abstract
Preference optimization is a critical post-training technique used to align large language models (LLMs) with human preferences, typically by fine-tuning on ranked response pairs. While methods like Direct Preference Optimization (DPO) have proven effective in English, they often fail to generalize robustly to multilingual settings. We propose a simple yet effective alternative, Confidence-Aware Preference Optimization (CAPO), which replaces DPO's fixed treatment of preference pairs with a dynamic loss scaling mechanism based on a relative reward. By modulating the learning signal according to the confidence in each preference pair, CAPO enhances robustness to the noisy or low-margin comparisons that are typical of multilingual text. Empirically, CAPO outperforms existing preference optimization baselines by at least 16% in reward accuracy, and improves alignment by widening the gap between preferred and dispreferred responses across languages.
Problem

Research questions and friction points this paper is trying to address.

Optimizing multilingual preference alignment in language models
Addressing noisy preference pairs in cross-lingual settings
Improving reward accuracy for preferred versus dispreferred responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic loss scaling based on relative reward
Modulates learning signal using confidence levels
Enhances robustness in multilingual preference optimization
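The mechanism above can be illustrated with a minimal sketch. The paper's exact confidence estimator and scaling rule are not given in this summary, so the sigmoid-of-reward-gap weighting below is an assumption; `capo_loss`, `dpo_loss`, and their parameters are hypothetical names for illustration only.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(margin: float, beta: float = 0.1) -> float:
    # Standard DPO objective: -log sigmoid(beta * margin), where margin is the
    # difference in log-probability ratios (policy vs. reference) between the
    # preferred and dispreferred responses.
    return -math.log(sigmoid(beta * margin))

def capo_loss(margin: float, reward_chosen: float, reward_rejected: float,
              beta: float = 0.1) -> float:
    # Hypothetical CAPO-style variant: scale the DPO loss by a confidence
    # weight derived from the relative reward gap. Ambiguous, low-margin
    # pairs (small reward gap) are down-weighted; clear-cut pairs keep
    # close to full weight. The sigmoid form is an assumed instantiation.
    confidence = sigmoid(reward_chosen - reward_rejected)
    return confidence * dpo_loss(margin, beta)
```

Under this sketch, a noisy multilingual pair whose reward model barely prefers the "chosen" response contributes a smaller gradient than a pair with a wide reward gap, which is the robustness behavior the summary describes.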