SCAR: Data Selection via Style Consistency-Aware Response Ranking for Efficient Instruction-Tuning of Large Language Models

📅 2024-06-16
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the ambiguity in how response style is defined and the unclear relationship between data quality and model performance in instruction tuning, this work proposes the first dual-dimensional framework that quantifies response style along two axes: linguistic form and instructional surprisal. Building on this, the authors develop an end-to-end, annotation-free, style-aware automatic ranking method that enables efficient data filtering by jointly modeling style consistency, computing response embedding similarity, and estimating instructional surprisal. Experiments show that fine-tuning on only 0.7% of the original dataset matches, and sometimes surpasses, full-data tuning on code generation and open-domain question answering benchmarks, substantially improving training efficiency and generalization. Key contributions: (i) establishing that response style is quantifiable; (ii) revealing a positive correlation between style consistency and model performance; and (iii) introducing the first style-driven paradigm for data subset selection.
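The ranking step described above can be sketched as a simple heuristic: embed every response and prefer examples whose response embedding lies closest to the centroid of the pool. This is a minimal illustration only, assuming a toy bag-of-words embedding in place of SCAR's learned ranker; the names `embed` and `rank_by_style_consistency` are hypothetical and not from the paper's codebase.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; SCAR itself uses learned neural
    # representations (this stand-in is an assumption for illustration).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_style_consistency(pairs):
    """Rank (instruction, response) pairs by how close each response's
    embedding is to the centroid of all response embeddings, so the most
    style-consistent examples come first."""
    embs = [embed(response) for _, response in pairs]
    centroid = Counter()
    for e in embs:
        centroid.update(e)
    scores = [cosine(e, centroid) for e in embs]
    order = sorted(range(len(pairs)), key=lambda i: scores[i], reverse=True)
    return [pairs[i] for i in order]
```

Selecting a small budget of training examples then amounts to keeping the top few percent of the ranked list.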

📝 Abstract
Recent studies emphasize that manually ensuring a consistent response style and maintaining high data quality in training sets can significantly improve the performance of fine-tuned Large Language Models (LLMs) while reducing the number of training examples needed. However, the precise definition of style and the relationship between style, data quality, and LLM performance remain unclear. This research identifies two key stylistic elements in responses: linguistic form and instructional surprisal. We find that, among training data of comparable quality, higher consistency in these response elements leads to better LLM performance. Inspired by this, we introduce Style Consistency-Aware Response Ranking (SCAR), which automatically prioritizes instruction-response pairs in the training set based on their response stylistic consistency. By selecting the most style-consistent examples (as little as 0.7% of the full dataset in certain cases), fine-tuned LLMs can match or even surpass the performance of models trained on the entire dataset on coding and open-ended question-answering benchmarks. Code and data are available at https://github.com/zhuang-li/SCAR.
Problem

Research questions and friction points this paper is trying to address.

Defines response style elements in LLM training data
Links style consistency to improved LLM performance
Proposes automated style-aware data selection method (SCAR)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically ranks training data by style consistency
Uses linguistic form and instructional surprisal metrics
Achieves high performance with minimal data (0.7%)
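For intuition, the "instructional surprisal" metric listed above can be approximated as the average per-token negative log-probability of a response under a language model. The sketch below uses a unigram model with add-one smoothing as a stand-in; SCAR itself conditions on the instruction via an LLM's likelihood, which this toy model does not capture, and all names here are hypothetical.

```python
import math
from collections import Counter

def mean_token_surprisal(response, token_counts, total_tokens, vocab_size):
    """Average per-token surprisal, -log p(token), under a unigram model
    with add-one smoothing. A toy proxy: the paper's instructional
    surprisal is conditioned on the instruction via an LLM."""
    tokens = response.lower().split()
    if not tokens:
        return 0.0
    nll = sum(
        -math.log((token_counts.get(t, 0) + 1) / (total_tokens + vocab_size))
        for t in tokens
    )
    return nll / len(tokens)

# Build unigram statistics from a (hypothetical) pool of responses.
corpus = ["the function returns the sum", "the function returns the product"]
counts = Counter(t for r in corpus for t in r.lower().split())
total = sum(counts.values())
vocab = len(counts)
```

Responses whose surprisal deviates sharply from the rest of the pool would be ranked lower, mirroring the consistency principle behind SCAR's selection.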