🤖 AI Summary
This work addresses key limitations of existing preference learning methods for large language models: marginal performance gains, high computational costs, hyperparameter sensitivity, and insufficient modeling of global semantic relationships among tokens. To overcome these issues, the study introduces optimal transport theory into preference learning for the first time, proposing a token-embedding-based optimal transport loss that preserves the model's original output distribution while capturing global semantic structure during fine-tuning-based alignment. Evaluated on seven preference tasks spanning human values and logical reasoning, the method significantly improves alignment performance while maintaining text fluency and coherence, demonstrating superior stability, robustness, and semantic modeling capability.
📝 Abstract
Preference learning in Large Language Models (LLMs) has advanced significantly, yet existing methods remain limited by modest performance gains, high computational costs, hyperparameter sensitivity, and insufficient modeling of global token-level relationships. We introduce PLOT, which enhances Preference Learning in fine-tuning-based alignment through a token-level loss derived from Optimal Transport. By formulating preference learning as an optimal transport problem, PLOT aligns model outputs with human preferences while preserving the original distribution of the LLM, ensuring stability and robustness. Furthermore, PLOT leverages token embeddings to capture semantic relationships, enabling globally informed optimization. Experiments across two preference categories, Human Values and Logic & Problem Solving, spanning seven sub-preferences demonstrate that PLOT consistently improves alignment performance while maintaining fluency and coherence. These results substantiate optimal transport as a principled methodology for preference learning, establishing a theoretically grounded framework that provides new insights into preference learning for LLMs.
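The abstract describes a token-level loss built on optimal transport over token embeddings. The paper's exact loss is not given here, but the core ingredient can be illustrated with a minimal sketch: an entropic-regularized (Sinkhorn) OT distance between two token distributions, where the ground cost is the squared distance between token embeddings. All names, values, and the choice of Sinkhorn iteration below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinkhorn_ot(p, q, cost, eps=0.5, n_iters=200):
    """Entropic-regularized OT cost between distributions p and q.

    p, q : 1-d probability vectors over a (toy) vocabulary
    cost : pairwise ground-cost matrix between token embeddings
    eps  : entropic regularization strength (assumed hyperparameter)
    """
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iters):              # alternating marginal scaling
        v = q / (K.T @ u)
        u = p / (K @ v)
    plan = u[:, None] * K * v[None, :]    # approximate transport plan
    return float(np.sum(plan * cost))     # transport cost under the plan

# Toy setup: 4 vocabulary tokens with random 3-d embeddings (hypothetical).
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 3))
# Ground cost: squared Euclidean distance between token embeddings,
# so semantically close tokens are cheap to transport between.
cost = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)

p = np.array([0.7, 0.1, 0.1, 0.1])   # e.g. fine-tuned model's next-token distribution
q = np.array([0.4, 0.3, 0.2, 0.1])   # e.g. reference model's next-token distribution

loss = sinkhorn_ot(p, q, cost)
print(loss)
```

Because the cost matrix is built from embedding distances, this loss penalizes probability mass moving between semantically distant tokens more than between nearby ones, which is one way a global, embedding-aware regularizer can keep a fine-tuned model close to its reference distribution.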