DELIFT: Data Efficient Language model Instruction Fine Tuning

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the resource waste caused by data redundancy in large language model (LLM) instruction tuning, this paper proposes a unified, data-efficient selection framework applicable across three stages: instruction tuning, task-specific fine-tuning, and continual fine-tuning. The method introduces a submodular-function-driven data selection mechanism grounded in a pairwise utility metric, enabling gradient-free quantification of each sample's informational value for improving the model's responses. This supports coordinated optimization across all stages without expensive gradient computations. Extensive experiments across diverse tasks and model scales show that the approach reduces fine-tuning data volume by up to 70% while maintaining or even improving performance, outperforming existing data selection methods. The framework establishes a scalable, gradient-free, and stage-consistent paradigm for efficient LLM adaptation.

📝 Abstract
Fine-tuning large language models (LLMs) is essential for enhancing their performance on specific tasks but is often resource-intensive due to redundant or uninformative data. To address this inefficiency, we introduce DELIFT (Data Efficient Language model Instruction Fine-Tuning), a novel algorithm that systematically optimizes data selection across the three key stages of fine-tuning: (1) instruction tuning, (2) task-specific fine-tuning (e.g., reasoning, question-answering), and (3) continual fine-tuning (e.g., incorporating new data versions). Unlike existing methods that focus on single-stage optimization or rely on computationally intensive gradient calculations, DELIFT operates efficiently across all stages. Central to our approach is a pairwise utility metric that quantifies how beneficial a data sample is for improving the model's responses to other samples, effectively measuring the informational value relative to the model's current capabilities. By leveraging different submodular functions applied to this metric, DELIFT selects diverse and optimal subsets that are useful across all stages of fine-tuning. Experiments across various tasks and model scales demonstrate that DELIFT can reduce the fine-tuning data size by up to 70% without compromising performance, offering significant computational savings and outperforming existing methods in both efficiency and efficacy.
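The pairwise utility metric described above can be sketched in code. This is a minimal illustration, not the paper's implementation: `loss_fn` is an assumed callable that returns a model's (gradient-free) loss on a sample's response given an optional in-context demonstration, and the `topic`/`prompt` fields of the samples are placeholders for whatever representation the model consumes.

```python
import numpy as np

def build_utility_matrix(loss_fn, data):
    """Sketch of a DELIFT-style pairwise utility matrix (an assumption,
    not the paper's exact code):
        util[i, j] = loss(sample j | no context) - loss(sample j | sample i in-context)
    A positive entry means sample i is informative for sample j. Only forward
    losses are needed, so the computation is gradient-free."""
    n = len(data)
    # zero-shot loss of each sample, with no in-context demonstration
    base = np.array([loss_fn(None, s) for s in data])
    util = np.zeros((n, n))
    for i, demo in enumerate(data):
        for j, s in enumerate(data):
            # loss drop on sample j when demo i is shown in-context
            util[i, j] = base[j] - loss_fn(demo, s)
    return util
```

A submodular function applied over this matrix (e.g. facility location) then selects a subset whose rows jointly "cover" the rest of the data.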
Problem

Research questions and friction points this paper is trying to address.

Redundant or uninformative data makes LLM fine-tuning resource-intensive.
Existing data selection methods target a single fine-tuning stage or rely on expensive gradient computations.
Measuring a sample's informational value relative to the model's current capabilities is non-trivial.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes data selection across fine-tuning stages
Uses pairwise utility metric for data sample evaluation
Reduces fine-tuning data size by up to 70% without performance loss
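Given a pairwise utility matrix, the selection step can be illustrated with a greedy facility-location maximizer. Facility location is one standard submodular function of the kind the paper applies; the matrix values and budget `k` below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def facility_location_greedy(utility, k):
    """Greedily select k samples maximizing the facility-location objective
        f(S) = sum_j max_{i in S} utility[i, j],
    a monotone submodular function, so greedy selection carries the usual
    (1 - 1/e) approximation guarantee. `utility[i, j]` is the pairwise
    utility of sample i for sample j."""
    n = utility.shape[0]
    selected = []
    coverage = np.zeros(n)  # best coverage of each sample j so far
    for _ in range(k):
        # marginal gain of adding each candidate row i
        gains = np.maximum(utility, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf  # never re-pick a selected sample
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, utility[best])
    return selected
```

Each iteration costs one vectorized pass over the matrix, so the selection itself is cheap relative to computing the utilities.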
Ishika Agarwal
University of Illinois Urbana-Champaign
Krishna Killamsetty
IBM Research
Lucian Popa
IBM Almaden Research Center
Data Management, Data Integration, Entity Resolution, Entity Linking
Marina Danilevsky
IBM Research