Efficient Data Selection at Scale via Influence Distillation

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Data selection for efficient fine-tuning of large language models (LLMs) remains challenging due to the high computational cost of influence estimation. Method: This paper proposes Influence Distillation, a data selection framework grounded in second-order influence analysis. It derives a theoretically justified second-order influence measure that quantifies how training samples affect performance on a target distribution, yields optimal sample weights under both Gradient Descent and Adam optimizers, and introduces landmark samples for efficient approximation. Contribution/Results: The work integrates mathematically rigorous second-order influence analysis into data selection, balancing theoretical guarantees with computational scalability. Evaluated on instruction tuning of Llama and Qwen models using Tulu V2, the method matches or exceeds state-of-the-art methods on GSM8k, SQuAD, and MMLU while accelerating data selection by up to 3.5×.

📝 Abstract
Effective data selection is critical for efficient training of modern Large Language Models (LLMs). This paper introduces Influence Distillation, a novel, mathematically-justified framework for data selection that employs second-order information to optimally weight training samples. By distilling each sample's influence on a target distribution, our method assigns model-specific weights that are used to select training data for LLM fine-tuning, guiding it toward strong performance on the target domain. We derive these optimal weights for both Gradient Descent and Adam optimizers. To ensure scalability and reduce computational cost, we propose a $\textit{landmark-based approximation}$: influence is precisely computed for a small subset of "landmark" samples and then efficiently propagated to all other samples to determine their weights. We validate Influence Distillation by applying it to instruction tuning on the Tulu V2 dataset, targeting a range of tasks including GSM8k, SQuAD, and MMLU, across several models from the Llama and Qwen families. Experiments show that Influence Distillation matches or outperforms state-of-the-art performance while achieving up to $3.5\times$ faster selection.
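The landmark-based approximation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual method: the "exact" influence here is a stand-in (gradient-alignment with a target direction rather than the paper's second-order measure), and the embeddings, RBF kernel, and all variable names are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 candidate samples with feature embeddings and a
# target-domain gradient direction (all sizes/names are illustrative).
n_samples, n_landmarks, dim = 1000, 50, 32
embeddings = rng.normal(size=(n_samples, dim))
target_grad = rng.normal(size=dim)

# Step 1: pick a small landmark subset and compute its influence "exactly".
# Stand-in influence: alignment of each landmark's (proxy) gradient with the
# target gradient; the paper's true measure involves second-order terms.
landmark_idx = rng.choice(n_samples, size=n_landmarks, replace=False)
landmark_emb = embeddings[landmark_idx]
landmark_influence = landmark_emb @ target_grad

# Step 2: propagate influence from landmarks to every sample through a
# similarity kernel (RBF over embeddings), row-normalized to sum to 1.
d2 = ((embeddings[:, None, :] - landmark_emb[None, :, :]) ** 2).sum(-1)
kernel = np.exp(-d2 / d2.mean())
kernel /= kernel.sum(axis=1, keepdims=True)
approx_influence = kernel @ landmark_influence  # shape (n_samples,)

# Step 3: select the top-k samples by approximated influence.
k = 100
selected = np.argsort(approx_influence)[-k:]
```

The key cost saving is that exact influence is evaluated for only `n_landmarks` samples; every other sample's score comes from a cheap kernel product.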
Problem

Research questions and friction points this paper is trying to address.

How to optimally select training data for efficient LLM fine-tuning
High computational cost of estimating each sample's influence
Scaling influence computation to large candidate pools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses second-order information for optimal weighting
Proposes landmark-based approximation for scalability
Derives optimal weights for Gradient Descent and Adam
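The Gradient Descent case of the weight derivation can be illustrated with a first-order argument: after one GD step with per-sample weights, the target loss changes by roughly $-\eta \sum_i w_i \, (g_i \cdot g_{\text{target}})$, so samples whose gradients align with the target gradient should receive more weight. The sketch below is a simplified first-order version under that assumption; the paper's derivation uses second-order information, and all names and the clip-and-normalize weighting rule here are illustrative (Adam would additionally rescale each $g_i$ by its second-moment estimate).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-sample gradients g_i and a target-distribution gradient.
n, dim, lr = 200, 16, 0.1
sample_grads = rng.normal(size=(n, dim))
target_grad = rng.normal(size=dim)

# First-order effect of one weighted GD step on the target loss:
#   delta_L ≈ -lr * sum_i w_i * (g_i · g_target)
alignment = sample_grads @ target_grad

# Simple illustrative weighting: keep only positively aligned samples,
# then normalize the weights to sum to 1.
weights = np.clip(alignment, 0.0, None)
weights /= weights.sum()

# Predicted first-order improvement on the target loss (non-negative by
# construction, since zero weight is assigned to negatively aligned samples).
predicted_gain = lr * (weights @ alignment)
```

Under this toy rule, data selection reduces to ranking samples by `alignment`; the paper's contribution is making the analogous weights exact and tractable for both GD and Adam.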