🤖 AI Summary
To address inefficient data selection in task-specific fine-tuning of large language models, this paper proposes TSDS, a data selection framework guided by a small but representative set of target-task examples. The method formulates data selection as an optimal transport (OT) optimization problem that combines a distribution alignment loss with a diversity regularizer: the OT loss keeps the selected data faithful to the target-task distribution, while kernel density estimation inside the regularizer suppresses near-duplicates among the candidate data. The optimization problem is further connected to nearest neighbor search, yielding efficient algorithms based on approximate nearest-neighbor techniques. The method is evaluated on data selection for both continued pretraining and instruction tuning of language models; instruction tuning with a 1% selection ratio often outperforms using the full dataset and beats baseline selection methods by 1.5 points in F1 score on average, significantly improving fine-tuning efficiency across diverse downstream tasks.
📝 Abstract
Finetuning foundation models for specific tasks is an emerging paradigm in modern machine learning. The efficacy of task-specific finetuning largely depends on the selection of appropriate training data. We present TSDS (Task-Specific Data Selection), a framework to select data for task-specific model finetuning, guided by a small but representative set of examples from the target task. To do so, we formulate data selection for task-specific finetuning as an optimization problem with a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution. In addition, we add a regularizer to encourage the diversity of the selected data and incorporate kernel density estimation into the regularizer to reduce the negative effects of near-duplicates among the candidate data. We connect our optimization problem to nearest neighbor search and design efficient algorithms to compute the optimal solution based on approximate nearest neighbor search techniques. We evaluate our method on data selection for both continued pretraining and instruction tuning of language models. We show that instruction tuning using data selected by our method with a 1% selection ratio often outperforms using the full dataset and beats the baseline selection methods by 1.5 points in F1 score on average.
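The selection recipe in the abstract (an OT-based alignment term plus a KDE-based diversity penalty, computed via nearest-neighbor search) can be illustrated with a minimal sketch. This is not the authors' released code: the function name `select_data`, the 1-nearest-neighbor surrogate for the transport cost, and the Gaussian bandwidth are illustrative assumptions, and a real implementation would use approximate nearest-neighbor indexing rather than brute-force distances.

```python
# Hypothetical sketch of TSDS-style selection (not the authors' code):
# score each candidate by its squared distance to the nearest target-task
# example -- a 1-NN surrogate for the optimal-transport alignment term --
# plus a log kernel-density penalty that discounts near-duplicate clusters.
import math

def select_data(candidates, targets, k, bandwidth=1.0):
    """Pick k candidates close to the target distribution but not redundant.

    candidates, targets: lists of equal-length feature vectors (embeddings).
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, c in enumerate(candidates):
        # Alignment: distance to the nearest target example (the paper
        # computes this step with approximate nearest-neighbor search).
        align = min(sqdist(c, t) for t in targets)
        # Diversity: Gaussian kernel density over the candidate pool;
        # a high value means many near-duplicates sit nearby.
        density = sum(
            math.exp(-sqdist(c, o) / (2 * bandwidth ** 2)) for o in candidates
        ) / len(candidates)
        # Lower alignment cost and lower local density are both better.
        scores.append((align + math.log(density), i))
    return [i for _, i in sorted(scores)[:k]]
```

With a small bandwidth, a tight cluster of near-identical candidates incurs a large density penalty, so a slightly more distant but distinct example can outrank its members, which is the intended effect of the KDE regularizer.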