🤖 AI Summary
Direct Preference Optimization (DPO) lacks a principled theoretical foundation for its log-ratio reward formulation. Method: We reformulate preference optimization from an information-theoretic perspective, introducing the Differential Information Distribution (DID) to quantify the information gained during policy updates, thereby unifying preference data, the DPO objective, and policy behavior within a coherent information-theoretic framework. We uncover DPO's implicit inductive bias toward log-margin-ordered policies and prove that the log-ratio reward is the uniquely optimal form under this condition. Furthermore, we characterize policy reinforcement and smoothing via DID entropy and derive a closed-form optimal sampling distribution over rejected responses, explaining the observed log-likelihood displacement phenomenon. Results: Experiments demonstrate that high-entropy DID enhances general instruction-following capability, whereas low-entropy DID improves performance on knowledge-intensive question answering.
📝 Abstract
Direct Preference Optimization (DPO) has become a standard technique for aligning language models with human preferences in a supervised manner. Despite its empirical success, the theoretical justification behind its log-ratio reward parameterization remains incomplete. In this work, we address this gap by introducing the Differential Information Distribution (DID): a distribution over token sequences that captures the information gained during policy updates. First, we show that when preference labels encode the differential information required to transform a reference policy into a target policy, the log-ratio reward in DPO emerges as the uniquely optimal form for learning the target policy via preference optimization. This result naturally yields a closed-form expression for the optimal sampling distribution over rejected responses. Second, we find that the condition for preferences to encode differential information is fundamentally linked to an implicit assumption about log-margin ordered policies, an inductive bias widely used in preference optimization yet previously unrecognized. Finally, by analyzing the entropy of the DID, we characterize how learning low-entropy differential information reinforces the policy distribution, while learning high-entropy differential information induces a smoothing effect, which explains the log-likelihood displacement phenomenon. We validate our theoretical findings in synthetic experiments and extend them to real-world instruction-following datasets. Our results suggest that learning high-entropy differential information is crucial for general instruction-following, while learning low-entropy differential information benefits knowledge-intensive question answering. Overall, our work presents a unifying perspective on the DPO objective, the structure of preference data, and the resulting policy behaviors through the lens of differential information.
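For readers unfamiliar with the log-ratio reward discussed above, a minimal sketch of the standard DPO objective may help. This is the well-known form from the original DPO literature, not code from this paper; the function and variable names are illustrative, and the inputs are sequence-level log-probabilities under the trained policy and the frozen reference policy.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair, built from the log-ratio
    reward r(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x))."""
    # Log-ratio rewards for the chosen and rejected responses.
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Bradley-Terry negative log-likelihood of preferring chosen over rejected.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy's log-ratio margin between chosen and rejected responses is positive, the loss drops below log 2; at a zero margin it equals log 2 exactly. The paper's result concerns why this particular log-ratio parameterization, rather than some other reward form, is the uniquely optimal choice when preferences encode differential information.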