Diverse, not Short: A Length-Controlled Self-Learning Framework for Improving Response Diversity of Language Models

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Language model response diversity is systematically constrained by mainstream evaluation metrics and reward models’ inherent bias toward shorter outputs. Method: We propose a length-controllable self-training framework to explicitly identify and mitigate this length bias: (i) a length-aware preference data construction paradigm; (ii) length-controlled sampling coupled with triple filtering for diversity, quality, and length; (iii) leveraging small models as “diversity teachers” to enable cross-scale knowledge transfer; and (iv) efficient DPO training using only 3,000 lightweight preference pairs. Results: On LLaMA-3.1-8B and Olmo-2 (7B/13B), our method significantly improves lexical and semantic diversity across four creative generation tasks, while maintaining or even enhancing response quality—demonstrating the first systematic alleviation of length-induced diversity degradation in LLM alignment.

📝 Abstract
Diverse language model responses are crucial for creative generation, open-ended tasks, and self-improvement training. We show that common diversity metrics, and even reward models used for preference optimization, systematically bias models toward shorter outputs, limiting expressiveness. To address this, we introduce Diverse, not Short (Diverse-NS), a length-controlled self-learning framework that improves response diversity while maintaining length parity. By generating and filtering preference data that balances diversity, quality, and length, Diverse-NS enables effective training using only 3,000 preference pairs. Applied to LLaMA-3.1-8B and the Olmo-2 family, Diverse-NS substantially enhances lexical and semantic diversity. We show consistent improvements in diversity, with minor reductions or even gains in response quality, on four creative generation tasks: Divergent Associations, Persona Generation, Alternate Uses, and Creative Writing. Surprisingly, experiments with the Olmo-2 model family (7B and 13B) show that smaller models like Olmo-2-7B can serve as effective "diversity teachers" for larger models. By explicitly addressing length bias, our method efficiently pushes models toward more diverse and expressive outputs.
Problem

Research questions and friction points this paper is trying to address.

Addressing bias toward shorter outputs in diversity metrics
Improving response diversity while maintaining length parity
Enhancing lexical and semantic diversity in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Length-controlled self-learning framework for diversity
Generates and filters balanced preference data
Smaller models teach diversity to larger models
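The triple-filtering step described above (diversity, quality, and length parity before DPO training) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `distinct_n` proxy, the quality callback, and all thresholds (`max_len_ratio`, `min_quality`) are hypothetical stand-ins for the metrics and reward model the authors actually use.

```python
def distinct_n(text: str, n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams: a simple lexical-diversity proxy."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams)


def build_preference_pair(resp_a, resp_b, quality,
                          max_len_ratio=1.2, min_quality=0.5):
    """Return a (chosen, rejected) pair if all three filters pass, else None.

    quality: callable mapping a response to a [0, 1] score
    (e.g. a reward model; stubbed by the caller here).
    """
    len_a, len_b = len(resp_a.split()), len(resp_b.split())
    # Length filter: responses must be of comparable length so the
    # preference signal rewards diversity rather than brevity.
    if max(len_a, len_b) / max(min(len_a, len_b), 1) > max_len_ratio:
        return None
    # Quality filter: both responses must clear a minimum quality bar.
    if quality(resp_a) < min_quality or quality(resp_b) < min_quality:
        return None
    # Diversity filter: prefer the more lexically diverse response.
    div_a, div_b = distinct_n(resp_a), distinct_n(resp_b)
    if div_a == div_b:
        return None
    return (resp_a, resp_b) if div_a > div_b else (resp_b, resp_a)
```

Because the length filter removes pairs where one side is much shorter, the resulting preference data cannot teach the model that "shorter is better", which is the length bias the paper identifies in standard diversity metrics and reward models.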