🤖 AI Summary
Existing data evaluation and selection methods for instruction tuning lack a systematic foundation, and commonly used metrics are poorly aligned with downstream task performance. Method: We propose the first three-dimensional evaluation taxonomy for instruction tuning of large language models—encompassing quality, diversity, and importance—establishing a unified classification framework that enables cross-method comparison. Through a systematic review of 120+ evaluation approaches, we identify seven recurrent limitations and expose structural bottlenecks between metric design and task adaptability. Empirical comparisons further reveal deficiencies in the generalizability, reliability, and interpretability of current methods. Contribution/Results: We articulate five key open challenges and outline corresponding research directions. All implementations are open-sourced on GitHub, providing both theoretical foundations and practical guidelines for efficient, reliable, and interpretable instruction data engineering.
📝 Abstract
Instruction tuning plays a critical role in aligning large language models (LLMs) with human preferences. Despite the vast number of open instruction datasets, naively training an LLM on all existing instructions may be neither optimal nor practical. To pinpoint the most beneficial data points, data assessment and selection methods have been proposed in the fields of natural language processing (NLP) and deep learning. However, in the context of instruction tuning, there remains a gap in knowledge about which data evaluation metrics can be employed and how they can be integrated into the selection mechanism. To bridge this gap, we present a comprehensive review of the existing literature on data assessment and selection, with a focus on instruction tuning of LLMs. We systematically categorize all applicable methods into quality-based, diversity-based, and importance-based ones, structured within a unified, fine-grained taxonomy. For each category, representative methods are elaborated to describe the landscape of relevant research. In addition, the latest methods are compared on their officially reported results to provide an in-depth discussion of their limitations. Finally, we summarize the open challenges and propose promising avenues for future studies. All related contents are available at https://github.com/yuleiqin/fantastic-data-engineering.