🤖 AI Summary
In instruction tuning of large language models, data selection struggles to simultaneously optimize capability strength and task balance. This paper identifies an inherent task bias in influence estimation—a key metric for data selection—and proposes Balanced Influence-driven Data Selection (BIDS), the first framework explicitly designed to reconcile capability performance with task distributional fairness. BIDS employs instance-level influence normalization and a task-representativeness-guided iterative greedy selection strategy. Evaluated on Llama-3 and Mistral-v0.3, it achieves superior performance using only 15% of the training data compared to full-data fine-tuning. On seven benchmarks spanning reasoning, writing, mathematics, and three other capability domains, the resulting models attain state-of-the-art overall accuracy while exhibiting significantly improved task-wise performance balance. The core contributions are: (i) uncovering and theoretically characterizing task bias in influence estimation; (ii) proposing a bias-correction mechanism; and (iii) establishing the first data selection paradigm that jointly optimizes both capability strength and task distributional equity.
📝 Abstract
Selecting appropriate training data is crucial for effective instruction fine-tuning of large language models (LLMs), which aims to (1) elicit strong capabilities, and (2) achieve balanced performance across a diverse range of tasks. Influence-based methods show promise in achieving (1) by estimating the contribution of each training example to the model's predictions, but often struggle with (2). Our systematic investigation reveals that this underperformance can be attributed to an inherent bias where certain tasks intrinsically have greater influence than others. As a result, data selection is often biased towards these tasks, not only hurting the model's performance on others but also, counterintuitively, harms performance on these high-influence tasks themselves. As a remedy, we propose BIDS, a Balanced and Influential Data Selection algorithm. BIDS first normalizes influence scores of the training data, and then iteratively balances data selection by choosing the training example with the highest influence on the most underrepresented task. Experiments with both Llama-3 and Mistral-v0.3 on seven benchmarks spanning five diverse capabilities show that BIDS consistently outperforms both state-of-the-art influence-based algorithms and other non-influence-based selection frameworks. Surprisingly, training on a 15% subset selected by BIDS can even outperform full-dataset training with a much more balanced performance. Our analysis further highlights the importance of both instance-level normalization and iterative optimization of selected data for balanced learning of diverse capabilities.