Capturing Nuanced Preferences: Preference-Aligned Distillation for Small Language Models

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing preference distillation methods rely on pairwise response comparisons, neglecting variations in preference strength and thus hindering small models' ability to capture fine-grained human preferences. Method: We propose Preference-Aligned Distillation (PAD), a framework that treats language models as implicit reward functions and models teacher and student preference probability distributions over the full response space. PAD achieves end-to-end distribution alignment via KL-divergence minimization, combining high-temperature sampling with LM-based intrinsic reward estimation, and thereby breaks away from the conventional pairwise-comparison paradigm. Contribution/Results: PAD achieves over 20% improvement on AlpacaEval 2 and Arena-Hard. Notably, Gemma-series student models surpass their teachers on MT-Bench, outperforming state-of-the-art alignment methods by a significant margin.

📝 Abstract
Aligning small language models (SLMs) with human values typically involves distilling preference knowledge from large language models (LLMs). However, existing distillation methods model preference knowledge in teacher LLMs by comparing pairwise responses, overlooking the extent of the difference between responses. This limitation hinders student SLMs from capturing nuanced preferences across multiple responses. In this paper, we propose a Preference-Aligned Distillation (PAD) framework, which models the teacher's preference knowledge as a probability distribution over all potential preferences, thereby providing more nuanced supervisory signals. Our insight in developing PAD is rooted in the demonstration that language models can serve as reward functions, reflecting their intrinsic preferences. Based on this, PAD comprises three key steps: (1) sampling diverse responses using high temperature; (2) computing rewards for both teacher and student to construct their intrinsic preferences; and (3) training the student's intrinsic preference distribution to align with the teacher's. Experiments on four mainstream alignment benchmarks demonstrate that PAD consistently and significantly outperforms existing approaches, achieving over 20% improvement on AlpacaEval 2 and Arena-Hard, indicating superior alignment with human preferences. Notably, on MT-Bench, using the Gemma model family, the student trained by PAD surpasses its teacher, further validating the effectiveness of PAD.
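The three steps in the abstract can be sketched as a single training loss. This is a minimal illustration, not the authors' implementation: function names and shapes are assumptions, responses are assumed to already have been sampled from the teacher at high temperature, and a model's intrinsic reward for a response is approximated here as the summed token log-probability of that response.

```python
# Hypothetical sketch of a PAD-style distribution-alignment loss.
# Assumed shapes: logits [K, T, V] and response_ids [K, T] for K sampled
# responses of length T over a vocabulary of size V.
import torch
import torch.nn.functional as F

def intrinsic_reward(logits, response_ids):
    # Summed token log-probability of each response under the model,
    # used as that model's intrinsic reward (step 2 in the abstract).
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)
    return token_logp.sum(dim=-1)  # [K]: one scalar reward per response

def pad_loss(teacher_logits, student_logits, response_ids, beta=1.0):
    # Rewards for teacher and student over the same sampled responses.
    r_teacher = intrinsic_reward(teacher_logits, response_ids)
    r_student = intrinsic_reward(student_logits, response_ids)
    # Preference distributions over the K responses (softmax of rewards).
    p_teacher = F.softmax(r_teacher / beta, dim=-1)
    log_p_student = F.log_softmax(r_student / beta, dim=-1)
    # Step 3: align the student's preference distribution with the
    # teacher's by minimizing KL(teacher || student).
    return F.kl_div(log_p_student, p_teacher, reduction="sum")
```

The loss is zero when the two preference distributions coincide and penalizes the student for ranking the sampled responses differently from the teacher; `beta` is an assumed temperature controlling how sharp the preference distributions are.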
Problem

Research questions and friction points this paper is trying to address.

Aligning small language models with human preferences
Capturing nuanced preferences in response differences
Improving distillation methods for preference knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models the teacher's preference as a probability distribution over responses
Uses high-temperature sampling to generate diverse responses
Aligns the student's preference distribution with the teacher's via KL minimization