🤖 AI Summary
To address the inefficiency of data selection in instruction tuning, this paper proposes a data importance-aware filtering method. The core innovation is the Model Instruction Weakness Value (MIWV), a dynamic metric that quantifies how much each instruction instance contributes to mitigating the model's capability gaps—defined as the discrepancy between the model's in-context learning (ICL) response and the ideal output. This formulation departs from conventional static quality-scoring paradigms. Experiments demonstrate that fine-tuning on only the top 1% of instances, ranked by MIWV, yields superior performance across multiple benchmarks compared to full-dataset training. The method substantially improves instruction tuning efficiency while providing an interpretable, reproducible criterion for curating high-quality datasets.
📝 Abstract
Instruction tuning plays a critical role in enhancing the performance and efficiency of Large Language Models (LLMs). Its success depends not only on the quality of the instruction data but also on the inherent capabilities of the LLM itself. Some studies suggest that even a small amount of high-quality data can achieve instruction fine-tuning results that are on par with, or even exceed, those from using a full-scale dataset. However, rather than focusing solely on computing data quality scores to evaluate instruction data, there is a growing need to select high-quality data that maximally enhances the performance of instruction tuning for a given LLM. In this paper, we propose the Model Instruction Weakness Value (MIWV) as a novel metric to quantify the importance of instruction data in enhancing a model's capabilities. The MIWV metric is derived from the discrepancies in the model's responses when using In-Context Learning (ICL), helping identify the data most beneficial for enhancing instruction tuning performance. Our experimental results demonstrate that selecting only the top 1% of data based on MIWV can outperform training on the full dataset. Furthermore, this approach extends beyond existing research that focuses on data quality scoring for data selection, and our results offer strong empirical evidence for the effectiveness of the proposed method.
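The selection pipeline described in the abstract—score each instruction instance by MIWV, rank, and keep the top 1%—can be sketched as below. Note this is a minimal illustration only: the paper's concrete MIWV formula (how the ICL-response discrepancy is measured) is not reproduced here, so the scores are a labeled placeholder, and `select_top_fraction` is a hypothetical helper name.

```python
def select_top_fraction(examples, scores, fraction=0.01):
    """Keep the examples with the highest scores (at least one).

    `scores` stands in for per-example MIWV values: higher means the
    model is assumed to be weaker on that instruction, so the example
    is more valuable for fine-tuning.
    """
    k = max(1, int(len(examples) * fraction))
    # Rank indices by score, highest first, and keep the top k.
    ranked = sorted(range(len(examples)), key=lambda i: scores[i], reverse=True)
    return [examples[i] for i in ranked[:k]]


# Toy data: 200 instruction instances with placeholder MIWV-style scores.
examples = [f"instruction_{i}" for i in range(200)]
scores = [(i * 37) % 200 / 200 for i in range(200)]  # NOT the real MIWV

subset = select_top_fraction(examples, scores, fraction=0.01)
print(len(subset))  # top 1% of 200 examples -> 2 selected
```

In practice the scores would come from comparing the LLM's ICL-conditioned response against the reference output for each instance; only the ranking-and-truncation step is shown here.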