Learn More, Forget Less: A Gradient-Aware Data Selection Approach for LLM

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low data efficiency and catastrophic forgetting during supervised fine-tuning (SFT) of large language models (LLMs) on domain-specific tasks, this paper proposes Gradient-Aware Data Selection (GrADS). GrADS analyzes gradient magnitudes and their statistical distributions over training samples during initial forward-backward passes to adaptively select a subset that maximizes contribution to the target task while minimizing interference with general-purpose capabilities. Crucially, it requires no additional annotations or model modifications—only lightweight, self-supervised selection criteria derived solely from gradient information. Evaluated across medical, legal, and financial domains, GrADS achieves superior performance using only 5% of the full SFT dataset and delivers substantial gains at 50% data usage, while significantly mitigating degradation of general-domain competence. By enabling highly efficient, robust, and scalable domain adaptation, GrADS establishes a novel, principled paradigm for gradient-driven data curation in LLM fine-tuning.
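The selection signal GrADS relies on comes from gradients recorded in a preliminary training phase. Below is a minimal sketch, assuming a Hugging-Face-style PyTorch causal LM whose forward pass returns a loss; the function name and loop structure are illustrative assumptions, not the authors' released implementation. It runs one forward-backward pass per example and records the L2 norm of that example's gradient.

```python
# Minimal sketch of a preliminary gradient pass (illustrative, not the paper's code).
import torch
from torch.utils.data import DataLoader

def per_sample_grad_norms(model, dataset, collate_fn, device="cuda"):
    """Return the L2 norm of the loss gradient for every training example."""
    model.to(device).train()
    loader = DataLoader(dataset, batch_size=1, collate_fn=collate_fn)
    norms = []
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        model.zero_grad(set_to_none=True)
        loss = model(**batch).loss      # per-example causal-LM loss (assumes labels are in the batch)
        loss.backward()
        squared = sum(
            p.grad.detach().float().pow(2).sum().item()
            for p in model.parameters() if p.grad is not None
        )
        norms.append(squared ** 0.5)
    return norms
```

In practice one would restrict the pass to a subset of parameters (e.g., adapter weights) or sub-sample the candidate pool to keep this preliminary phase lightweight.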

📝 Abstract
Although large language models (LLMs) have achieved impressive results across numerous tasks, supervised fine-tuning (SFT) remains essential for adapting these models to specialized domains. However, SFT for domain specialization can be resource-intensive and sometimes leads to a deterioration in general capabilities due to catastrophic forgetting (CF). To address these issues, we propose a self-adaptive gradient-aware data selection approach (GrADS) for supervised fine-tuning of LLMs, which identifies effective subsets of training data by analyzing gradients obtained from a preliminary training phase. Specifically, we design self-guided criteria that leverage the magnitude and statistical distribution of gradients to prioritize examples that contribute the most to the model's learning process. This approach enables the acquisition of representative samples that enhance LLMs' understanding of domain-specific tasks. Through extensive experimentation with various LLMs across diverse domains such as medicine, law, and finance, GrADS has demonstrated significant efficiency and cost-effectiveness. Remarkably, using merely 5% of the data selected by GrADS, LLMs already surpass the performance of models fine-tuned on the entire dataset, and increasing the selection to 50% of the data yields further significant improvements, while catastrophic forgetting is substantially mitigated at the same time. We will release our code for GrADS later.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in LLM fine-tuning through gradient analysis
Selects optimal training subsets to enhance domain-specific performance
Reduces resource requirements while maintaining general capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-aware data selection for fine-tuning
Self-guided criteria based on gradient magnitude and statistical distribution (a simplified sketch follows this list)
Prioritizes examples enhancing domain-specific learning
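As a concrete illustration of how magnitude and distribution statistics could be turned into a selection rule, the sketch below standardizes per-example gradient norms (e.g., the output of the preliminary pass sketched earlier), drops extreme outliers, and keeps the examples with the largest remaining norms up to a target budget. The z_cap threshold and the specific statistic are assumptions for illustration; the paper's exact criterion may differ.

```python
# Illustrative selection rule over per-example gradient norms (assumed, simplified).
import numpy as np

def select_by_grad_stats(grad_norms, budget_fraction=0.05, z_cap=3.0):
    """Select a budget of examples with large but non-outlier gradient norms."""
    norms = np.asarray(grad_norms, dtype=np.float64)
    z = (norms - norms.mean()) / (norms.std() + 1e-8)    # standardized position in the norm distribution
    candidates = np.where(np.abs(z) <= z_cap)[0]          # drop extreme outliers (possibly noisy examples)
    k = max(1, int(len(norms) * budget_fraction))
    order = candidates[np.argsort(-norms[candidates])]    # largest norms first among the remainder
    return order[:k]
```

With budget_fraction=0.05 this would return the 5% subset referenced in the abstract; raising it to 0.5 corresponds to the 50% setting.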
Yibai Liu
Fu Foundation School of Engineering and Applied Science, Columbia University
Shihang Wang
DAMO Academy, Alibaba Inc.
Natural Language Processing
Zeming Liu
School of Computer Science and Engineering, Beihang University
Zheming Song
School of Computer Science and Engineering, Beihang University
Junzhe Wang
School of Computer Science and Engineering, Beihang University
Jingjing Liu
School of Computer Science and Engineering, Beihang University
Qingjie Liu
Professor, School of Computer Science and Engineering, Beihang University
Computer Vision and Pattern Recognition
Yunhong Wang
Professor, School of Computer Science and Engineering, Beihang University
Biometrics, Pattern Recognition, Image Processing, Computer Vision