🤖 AI Summary
This paper identifies a critical non-monotonicity flaw in comparative preference learning methods for AI alignment (e.g., DPO, GPO, GBT): after training on a comparison in which humans prefer outcome *y* over *z* (*y* ≻ *z*), the model may paradoxically assign *lower* probability (and reward) to *y* than before the update.
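To make the failure mode concrete, here is a minimal runnable sketch (ours, not code from the paper): one DPO gradient step on a toy three-outcome softmax policy whose logits share linear parameters. The feature vectors, learning rate, and β are illustrative assumptions chosen so the effect shows up; the update increases the *y*-over-*z* margin yet lowers log π(*y*).

```python
import torch
import torch.nn.functional as F

# Toy policy over 3 outcomes {y, z, w}: logits = features @ weights.
# Parameter sharing is what makes non-monotonicity possible; with a
# tabular (one-logit-per-outcome) policy, this DPO step always raises pi(y).
features = torch.tensor([[1.0,  0.0],   # phi(y)
                         [2.0, -1.0],   # phi(z): correlated with phi(y)
                         [0.0,  1.0]])  # phi(w)
weights = torch.tensor([-2.0, 2.0], requires_grad=True)
beta, lr = 1.0, 0.2  # illustrative hyperparameters

def log_probs(w):
    return F.log_softmax(features @ w, dim=0)

with torch.no_grad():
    ref = log_probs(weights).clone()  # frozen reference policy

logp = log_probs(weights)
# DPO loss on the observed comparison: y (index 0) preferred to z (index 1).
margin = beta * ((logp[0] - ref[0]) - (logp[1] - ref[1]))
loss = -F.logsigmoid(margin)
loss.backward()

with torch.no_grad():
    before = logp.detach()
    weights -= lr * weights.grad  # one SGD step
    after = log_probs(weights)

print(f"log pi(y):  {before[0]:.4f} -> {after[0]:.4f}")                     # decreases
print(f"margin y-z: {before[0] - before[1]:.4f} -> {after[0] - after[1]:.4f}")  # increases
```

The parameter sharing is essential: the update moves the weights along φ(*y*) − φ(*z*), and since φ(*y*)·(φ(*y*) − φ(*z*)) < 0 here, raising the pairwise margin drags the winner's own logit (and log-probability) down.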
Method: We introduce “local pairwise monotonicity”, a formal definition of monotonicity tailored to preference learning, and systematically formalize several further monotonicity variants. Leveraging a generalized preference learning framework, we combine probabilistic modeling with reward-structure analysis to derive verifiable sufficient conditions, yielding a theoretical toolkit for assessing monotonic robustness.
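The precise definition lives in the full paper; one natural formalization consistent with this summary (our notational guess, not the authors' statement) is that an infinitesimal gradient step on the observed comparison $y \succ z$ never decreases the log-probability margin of the winner over the loser:

$$\frac{d}{dt}\Big(\log \pi_{\theta_t}(y \mid x) - \log \pi_{\theta_t}(z \mid x)\Big)\Big|_{t=0} \;\ge\; 0, \qquad \dot{\theta}_t = -\nabla_\theta\, \ell\big(\theta_t;\, x,\, y \succ z\big).$$

That is, locally (one update, one pair), the model moves in the direction the observed preference indicates; stronger global or absolute-probability versions of this property are the variants that can fail, as the toy example above illustrates.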
Results: Under mild assumptions, we prove that existing methods retain local pairwise monotonicity, while characterizing the boundaries beyond which stronger monotonicity properties fail. Our analysis provides foundational theoretical guidance and practical constraints for designing more trustworthy, interpretable preference learning algorithms.
📝 Abstract
Comparison-based preference learning has become central to the alignment of AI models with human preferences. However, these methods may behave counterintuitively. After empirically observing that, when training on a preference for response $y$ over $z$, the model may actually decrease the probability (and reward) of generating $y$ (an observation also made by others), this paper investigates the root causes of (non-)monotonicity within a general comparison-based preference learning framework that subsumes Direct Preference Optimization (DPO), Generalized Preference Optimization (GPO), and Generalized Bradley-Terry (GBT). Under mild assumptions, we prove that such methods still satisfy what we call local pairwise monotonicity. We also provide a bouquet of formalizations of monotonicity and identify sufficient conditions under which each is guaranteed, thereby providing a toolbox for evaluating how prone learning models are to monotonicity violations. These results clarify the limitations of current methods and provide guidance for developing more trustworthy preference learning algorithms.
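For orientation, the generalized framework referenced here can be written (in our notation; the paper's may differ) as minimizing a convex decreasing transform $f$ of the implicit reward margin:

$$\mathcal{L}(\theta) \;=\; \mathbb{E}_{(x,\, y \succ z)}\Big[ f\big(\beta\,[\,h_\theta(x, y) - h_\theta(x, z)\,]\big) \Big], \qquad h_\theta(x, y) = \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},$$

where $f(t) = -\log \sigma(t)$ recovers DPO, and other choices of $f$ (and of the comparison structure) yield GPO-style and Generalized Bradley-Terry objectives.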