AMaPO: Adaptive Margin-attached Preference Optimization for Language Model Alignment

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current language model preference optimization suffers from an "overfitting-underfitting dilemma": fixed margins induce gradient redundancy on correctly ranked pairs (overfitting) while providing insufficient correction for incorrectly ranked pairs (underfitting). To address this, we propose AMaPO (Adaptive Margin-attached Preference Optimization), which formally characterizes this dilemma and introduces the first instance-level adaptive margin mechanism. Leveraging Z-score normalization and exponential scaling, AMaPO dynamically adjusts the margin in the Bradley–Terry loss to redistribute gradients across training pairs. Integrated into offline preference optimization frameworks, it requires no additional annotations or online interaction. On standard benchmarks, AMaPO consistently improves ranking accuracy and downstream task alignment. Ablation studies and empirical analysis confirm that it alleviates gradient allocation imbalance, enhancing both training stability and generalization.

📝 Abstract
Offline preference optimization offers a simpler and more stable alternative to RLHF for aligning language models. However, its effectiveness depends critically on ranking accuracy, a metric where further gains are highly impactful. This limitation arises from a fundamental problem that we identify and formalize as the Overfitting-Underfitting Dilemma: current margin designs cause models to apply excessive, wasteful gradients to correctly ranked samples (overfitting) while providing insufficient corrective signals for misranked ones (underfitting). To resolve this dilemma, we propose Adaptive Margin-attached Preference Optimization (AMaPO), a simple yet principled algorithm. AMaPO employs an instance-wise adaptive margin, refined by Z-normalization and exponential scaling, which dynamically reallocates learning effort by amplifying gradients for misranked samples and suppressing them for correct ones. Extensive experiments on widely used benchmarks demonstrate that AMaPO achieves better ranking accuracy and superior downstream alignment performance, and targeted analysis confirms that it mitigates the core overfitting and underfitting issues.
Problem

Research questions and friction points this paper is trying to address.

Current preference optimization methods suffer from an overfitting-underfitting dilemma in ranking
Fixed-margin designs waste gradients on correctly ranked samples while under-correcting misranked ones
Existing approaches depend critically on ranking accuracy, a metric where further gains are highly impactful
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive margin dynamically reallocates learning effort
Z-normalization and exponential scaling refine margins
Amplifies gradients for misranked samples while suppressing gradients for correctly ranked ones
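The mechanism above can be sketched numerically. The paper does not give the exact formula here, so the following is a hypothetical sketch of one plausible instantiation: per-pair reward margins are Z-normalized across the batch, exponentially scaled into an instance-wise margin, and attached to a Bradley–Terry (logistic) loss. Misranked pairs (negative margin, hence low Z-score) receive a large adaptive margin and thus a large gradient; well-ranked pairs receive a small one. The function name `amapo_loss`, the scaling `gamma * exp(-z)`, and the gradient expression are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def amapo_loss(delta, gamma=1.0):
    """Illustrative adaptive-margin Bradley-Terry loss (sketch, not the paper's exact form).

    delta: per-pair reward margins r(y_w) - r(y_l); negative values mean
           the pair is currently misranked.
    """
    # Z-score normalize the margins across the batch.
    z = (delta - delta.mean()) / (delta.std() + 1e-8)
    # Exponential scaling: misranked pairs (low z) get a large margin,
    # well-ranked pairs (high z) a small one.
    margin = gamma * np.exp(-z)
    # Bradley-Terry loss with the instance-wise margin attached.
    loss = -np.log(sigmoid(delta - margin))
    # |d loss / d delta| = sigmoid(margin - delta): larger margin => larger gradient,
    # so learning effort is reallocated toward misranked pairs.
    grad_mag = sigmoid(margin - delta)
    return loss.mean(), grad_mag

delta = np.array([-2.0, -0.5, 0.5, 2.0])  # two misranked, two correct pairs
loss, grad = amapo_loss(delta)
```

With this toy batch, the gradient magnitudes decrease monotonically from the most misranked pair to the best-ranked one, which is exactly the reallocation the Innovation bullets describe.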
👥 Authors
Ruibo Deng (Sichuan University, Chengdu, China)
Duanyu Feng (Sichuan University)
Wenqiang Lei (Sichuan University, Chengdu, China)

Topics: Machine learning · Numerical optimization · Natural language processing