Diverse Preference Optimization

📅 2025-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Post-training of large language models (e.g., RLHF, preference optimization, supervised fine-tuning) tends to sharpen the output distribution, severely degrading response diversity and hindering creative generation tasks. To address this, the paper proposes DivPO, an online preference optimization method that jointly optimizes diversity and quality: preference pairs are selected from a pool of responses, with chosen examples being rare but high quality and rejected examples being common but low quality. Experiments demonstrate that DivPO maintains generation quality on par with baseline methods while improving persona attribute diversity by 45.6% and story diversity by 74.6%, mitigating the diversity collapse inherent in post-training.

📝 Abstract
Post-training of language models, either through reinforcement learning, preference optimization or supervised finetuning, tends to sharpen the output probability distribution and reduce the diversity of generated responses. This is particularly a problem for creative generative tasks where varied responses are desired. In this work we introduce Diverse Preference Optimization (DivPO), an online optimization method which learns to generate much more diverse responses than standard pipelines, while maintaining the quality of the generations. In DivPO, preference pairs are selected by first considering a pool of responses, and a measure of diversity among them, and selecting chosen examples as being more rare but high quality, while rejected examples are more common, but low quality. DivPO results in generating 45.6% more diverse persona attributes, and a 74.6% increase in story diversity, while maintaining similar win rates as standard baselines.
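The pair-selection rule described in the abstract can be sketched as follows. This is a minimal illustration, assuming identical-string counts as the rarity measure and a scalar quality score per response; the paper's actual diversity criterion and quality judge may differ, and the function name and signature are hypothetical:

```python
from collections import Counter

def select_divpo_pair(pool, quality_threshold):
    """Illustrative DivPO-style pair selection (hypothetical API).

    `pool` is a list of (response, quality) tuples. Rarity is approximated
    here by how often an identical response string appears in the pool.
    """
    counts = Counter(resp for resp, _ in pool)
    high = [(r, q) for r, q in pool if q >= quality_threshold]
    low = [(r, q) for r, q in pool if q < quality_threshold]
    if not high or not low:
        return None  # no valid preference pair in this pool
    # Chosen: the rarest response among the high-quality candidates.
    chosen = min(high, key=lambda rq: counts[rq[0]])[0]
    # Rejected: the most common response among the low-quality candidates.
    rejected = max(low, key=lambda rq: counts[rq[0]])[0]
    return chosen, rejected
```

The resulting (chosen, rejected) pairs would then feed a standard online preference-optimization update, so that the model is pushed toward rare-but-good responses and away from common-but-poor ones.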
Problem

Research questions and friction points this paper is trying to address.

Enhance response diversity in language models
Maintain quality while optimizing diversity
Apply Diverse Preference Optimization in creative tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online preference optimization method
Diversity-aware preference pair selection (rare but high-quality chosen, common but low-quality rejected)
Maintains generation quality at win rates comparable to standard baselines