CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In instruction tuning of small language models, existing synthetic data selection methods suffer from inefficiency and overreliance on single quality signals (e.g., reward scores or perplexity), leading to insufficient domain coverage and response diversity. Method: We propose CrowdSelect—a diversity-aware data selection framework that jointly leverages multi-LLM consensus and reward model evaluation. It introduces three foundational metrics—semantic coverage, logical rigor, and expressive diversity—based on cross-model response consistency, integrated via hierarchical clustering to jointly optimize both quality and diversity. The method supports feature modeling of instruction-response pairs and is compatible with both full-parameter and LoRA fine-tuning. Results: CrowdSelect achieves state-of-the-art performance on MT-Bench and Arena-Hard, improving fine-tuned Llama-3.2-3B-Instruct by +11.1% and +4.81%, respectively, outperforming all prior approaches.

📝 Abstract
Distilling advanced Large Language Models' instruction-following capabilities into smaller models using a selected subset has become a mainstream approach in model training. While existing synthetic instruction data selection strategies rely mainly on single-dimensional signals (i.e., reward scores, model perplexity), they fail to capture the complexity of instruction-following across diverse fields. Therefore, we investigate more diverse signals to capture comprehensive instruction-response pair characteristics and propose three foundational metrics that leverage Multi-LLM wisdom, informed by (1) diverse LLM responses and (2) reward model assessment. Building upon base metrics, we propose CrowdSelect, an integrated metric incorporating a clustering-based approach to maintain response diversity. Our comprehensive experiments demonstrate that our foundation metrics consistently improve performance across 4 base models on MT-Bench and Arena-Hard. CrowdSelect, efficiently incorporating all metrics, achieves state-of-the-art performance in both Full and LoRA fine-tuning, showing improvements of 4.81% on Arena-Hard and 11.1% on MT-Bench with Llama-3.2-3B-Instruct. We hope our findings will bring valuable insights for future research in this direction. Code is available at https://github.com/listentm/crowdselect.
Problem

Research questions and friction points this paper is trying to address.

Single-signal selection (reward score or perplexity) misses domain coverage and diversity
How to leverage Multi-LLM wisdom as a richer selection signal
Improve fine-tuning performance of small models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-LLM wisdom for data selection
Clustering-based response diversity maintenance
Integrated metric for model fine-tuning
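The quality-plus-diversity selection idea above can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: `quality` stands in for an integrated multi-LLM/reward signal, and the tiny k-means with farthest-point initialization is included only to keep the sketch self-contained.

```python
# Sketch: keep the highest-quality instruction-response pairs per embedding
# cluster, so the selected subset balances quality and response diversity.
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _kmeans(points, k, iters=10):
    # Deterministic farthest-point initialization, then standard Lloyd updates.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(_dist(p, c) for c in centers)))
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: _dist(p, centers[c]))
        for c in range(k):
            members = [points[i] for i, a in enumerate(assign) if a == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

def select_diverse_subset(pairs, quality, embeddings, k=3, per_cluster=2):
    """Keep the `per_cluster` highest-quality pairs from each embedding cluster."""
    assign = _kmeans(embeddings, k)
    chosen = []
    for c in range(k):
        idx = sorted((i for i, a in enumerate(assign) if a == c),
                     key=lambda i: quality[i], reverse=True)
        chosen.extend(idx[:per_cluster])
    return [pairs[i] for i in sorted(chosen)]
```

With two well-separated embedding clusters and `per_cluster=1`, the best-scored pair from each cluster is kept, even when one cluster's runner-up outscores the other cluster's best; a pure top-k by quality would instead collapse onto a single cluster.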
Yisen Li
Huazhong University of Science and Technology
Lingfeng Yang
Huazhong University of Science and Technology
Wenxuan Shen
South China University of Technology
Pan Zhou
Huazhong University of Science and Technology
Yao Wan
Huazhong University of Science and Technology
Weiwei Lin
School of Physics, Southeast University
Dongping Chen
Huazhong University of Science and Technology