Scalable Fine-tuning from Multiple Data Sources: A First-Order Approximation Approach

📅 2024-09-28
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the problem of efficiently selecting an optimal subset of auxiliary tasks from multiple sources to improve fine-tuning performance on a target task. The authors propose a lightweight, training-free subset selection method built on a meta initialization. The core contribution is a first-order approximation framework that uses a first-order Taylor expansion and gradient information at the meta initialization to estimate the fine-tuning loss of arbitrary task subsets, entirely on CPU and within seconds. Unlike conventional enumeration or reinforcement learning–based approaches, the method eliminates repeated fine-tuning, achieving a 30× speedup over baselines with only 1% estimation error. Empirically, on instruction tuning and chain-of-thought tuning benchmarks, subsets selected by this method yield up to a 3.8% absolute improvement in downstream task performance, outperforming existing subset selection techniques.

📝 Abstract
We study the problem of fine-tuning a language model (LM) for a target task by optimally using the information from $n$ auxiliary tasks. This problem has broad applications in NLP, such as targeted instruction tuning and data selection in chain-of-thought fine-tuning. The key challenge of this problem is that not all auxiliary tasks are beneficial in improving the performance of the target task. Thus, selecting the right subset of auxiliary tasks is crucial. Conventional subset selection methods, such as forward and backward stepwise selection, are unsuitable for LM fine-tuning because they require repeated training on subsets of auxiliary tasks. This paper introduces a new algorithm for estimating model fine-tuning performance without requiring repeated training. Our algorithm first performs multitask training using data from all tasks to obtain a meta initialization. Then, we approximate the model fine-tuning loss of a subset using functional values and gradients from the meta initialization. Empirically, we find that this gradient-based approximation holds with remarkable accuracy for twelve transformer-based LMs. Thus, we can now estimate fine-tuning performances on CPUs within a few seconds. Finally, we fine-tune the pretrained base model once on the selected subset of tasks. We conduct extensive experiments to validate this approach, delivering a speedup of $30\times$ over conventional subset selection while incurring only $1\%$ error of the true fine-tuning performances. In downstream evaluations involving both instruction tuning and chain-of-thought fine-tuning, this loss-based selection approach improves over prior gradient or representation similarity-based methods for subset selection by up to $3.8\%$.
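To make the core idea concrete, here is a minimal sketch of a first-order Taylor estimate of the target-task loss after fine-tuning on a subset of auxiliary tasks, using only values and gradients computed once at the meta initialization. All function and variable names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hedged sketch: estimate the target loss after one simulated gradient
# step on the averaged gradient of a chosen auxiliary-task subset S:
#     L(theta_meta + d) ~= L(theta_meta) + g_target . d,
# where d = -lr * mean_{i in S} g_i. No actual training is performed;
# gradients are taken once at the meta initialization.

def estimate_subset_loss(loss_meta, target_grad, task_grads, subset, lr=1e-2):
    """First-order Taylor estimate of the target loss for subset `subset`."""
    step = -lr * np.mean([task_grads[i] for i in subset], axis=0)
    return loss_meta + target_grad @ step
```

A subset whose averaged gradient aligns with the target-task gradient lowers the estimated loss, so ranking candidate subsets reduces to inexpensive dot products, which is why the estimate can run on CPUs in seconds.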
Problem

Research questions and friction points this paper is trying to address.

Optimally selecting beneficial auxiliary tasks for LM fine-tuning
Avoiding repeated training in subset selection for efficiency
Accurately estimating fine-tuning performance via gradient approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-order approximation for fine-tuning performance
Meta initialization from multitask training
Gradient-based subset selection without retraining
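The points above can be combined into a forward stepwise selection loop that scores candidate subsets with the first-order loss estimate instead of actual fine-tuning. This is a hedged, self-contained sketch; the names (`estimated_loss`, `forward_select`) and the single-step estimator are illustrative assumptions, not the paper's code.

```python
import numpy as np

def estimated_loss(loss_meta, target_grad, task_grads, subset, lr=1e-2):
    # Taylor estimate: L(theta_meta) + g_target . (-lr * mean of subset grads).
    step = -lr * np.mean([task_grads[i] for i in subset], axis=0)
    return loss_meta + target_grad @ step

def forward_select(loss_meta, target_grad, task_grads, k):
    """Greedily add the auxiliary task that most lowers the estimated loss."""
    selected = []
    remaining = set(range(len(task_grads)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: estimated_loss(loss_meta, target_grad,
                                                task_grads, selected + [i]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because each candidate evaluation is a single dot product rather than a training run, the classical forward stepwise procedure, otherwise impractical for LM fine-tuning, becomes feasible.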