🤖 AI Summary
Existing alignment methods (e.g., DPO) model response preferences only conditional on given instructions, neglecting the joint instruction-response distribution—leading to incomplete preference representations and limited generalization. This work proposes Joint Preference Optimization (JPO), the first method to formulate preference learning at the instruction-response *pair* level. JPO introduces a joint probability ratio objective that directly optimizes the discrepancy between the joint distributions of preferred and rejected instruction-response pairs. Integrating supervised fine-tuning with preference optimization, JPO is trained on human-annotated, pair-level preference data. On summarization and open-domain dialogue tasks, JPO achieves +5.2% and +3.3% win rates over DPO, respectively. Empirical results demonstrate significantly improved robustness to instruction variations and enhanced cross-task generalization capability.
📝 Abstract
A common technique for aligning large language models (LLMs) acquires human preferences by comparing multiple generations conditioned on a fixed context. This method relies solely on pairwise comparisons in which the generations are evaluated within an identical context. While effective, such conditional preferences often fail to capture the nuanced and multidimensional nature of human preferences. In this work, we revisit the traditional paradigm of preference acquisition and propose a new axis based on eliciting preferences jointly over instruction-response pairs. Unlike prior preference optimization methods, which are designed for conditional ranking protocols (e.g., DPO), we propose Joint Preference Optimization (JPO), a new preference optimization objective that upweights the joint probability of the chosen instruction-response pair over that of the rejected pair. Interestingly, LLMs trained on joint instruction-response preference data using JPO outperform LLMs trained with DPO by 5.2% and 3.3% win rate on summarization and open-ended dialogue datasets, respectively. Our findings reveal that joint preferences over instruction-response pairs can significantly enhance the alignment of LLMs by tapping into a broader spectrum of human preference elicitation. The data and code are available at https://github.com/Hritikbansal/dove.
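To make the objective concrete, here is a minimal sketch of what a DPO-style loss over *joint* instruction-response log-probabilities might look like. This is an illustration based on the abstract's description (upweighting the joint probability of the chosen pair over the rejected pair), not the paper's exact formulation; the function name `jpo_loss`, the reference-model log-ratios, and the `beta` temperature are assumptions carried over from the DPO objective.

```python
import math

def jpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Hypothetical JPO-style loss sketch.

    Unlike DPO, which compares log pi(y | x) for two responses to the
    *same* instruction x, the inputs here are joint log-probs
    log pi(x, y) of whole instruction-response pairs, so the chosen
    and rejected pairs may have different instructions.
    """
    # Log-ratios against a frozen reference model, as in DPO.
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(margin): small when the chosen pair is upweighted
    # relative to the rejected pair.
    return math.log1p(math.exp(-margin))
```

A larger margin between the chosen and rejected joint log-ratios yields a smaller loss, so gradient descent pushes the policy to assign higher joint probability to the preferred instruction-response pair.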