🤖 AI Summary
To address low data efficiency and catastrophic forgetting during supervised fine-tuning (SFT) of large language models (LLMs) on domain-specific tasks, this paper proposes Gradient-Aware Data Selection (GrADS). GrADS analyzes gradient magnitudes and their statistical distributions over training samples during initial forward-backward passes to adaptively select a subset that maximizes contribution to the target task while minimizing interference with general-purpose capabilities. Crucially, it requires no additional annotations or model modifications—only lightweight, self-supervised selection criteria derived solely from gradient information. Evaluated across medical, legal, and financial domains, GrADS achieves superior performance using only 5% of the full SFT dataset and delivers substantial gains at 50% data usage, while significantly mitigating degradation of general-domain competence. By enabling highly efficient, robust, and scalable domain adaptation, GrADS establishes a novel, principled paradigm for gradient-driven data curation in LLM fine-tuning.
📝 Abstract
Although large language models (LLMs) have achieved impressive results across numerous tasks, supervised fine-tuning (SFT) remains essential for adapting these models to specialized domains. However, SFT for domain specialization can be resource-intensive and can degrade general capabilities through catastrophic forgetting (CF). To address these issues, we propose a self-adaptive gradient-aware data selection approach (GrADS) for supervised fine-tuning of LLMs, which identifies effective subsets of training data by analyzing gradients obtained from a preliminary training phase. Specifically, we design self-guided criteria that leverage the magnitude and statistical distribution of gradients to prioritize the examples that contribute most to the model's learning process. This approach selects representative samples that enhance LLMs' understanding of domain-specific tasks. Through extensive experiments with various LLMs across diverse domains such as medicine, law, and finance, GrADS demonstrates significant efficiency and cost-effectiveness. Remarkably, with merely 5% of the GrADS-selected data, LLMs already surpass the performance of those fine-tuned on the entire dataset, and increasing to 50% of the data yields further significant improvements, while catastrophic forgetting is substantially mitigated. We will release our code for GrADS later.
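The abstract's core idea of scoring training examples by gradient magnitude and selecting a subset via a statistical criterion can be sketched as follows. This is a minimal illustration under simplifying assumptions (a linear model with squared-error loss, and a top-fraction-by-norm rule as a stand-in for GrADS' actual selection criteria, which the abstract does not fully specify); the function names are hypothetical.

```python
# Illustrative sketch of gradient-aware data selection. Assumes a linear
# model with squared-error loss; the "largest gradient norm" selection rule
# is a placeholder, not GrADS' exact magnitude/distribution criteria.
import numpy as np

def per_example_grad_norms(w, X, y):
    """Gradient norm of the squared-error loss for each training example.

    For example i the loss is (x_i . w - y_i)^2, whose gradient w.r.t. w
    is 2 * (x_i . w - y_i) * x_i.
    """
    residuals = X @ w - y                     # shape (n,)
    grads = 2.0 * residuals[:, None] * X      # shape (n, d), one gradient per example
    return np.linalg.norm(grads, axis=1)      # shape (n,)

def select_by_grad_stats(X, y, w, fraction=0.05):
    """Return indices of the `fraction` of examples with the largest
    gradient norms from a preliminary pass (illustrative criterion)."""
    norms = per_example_grad_norms(w, X, y)
    k = max(1, int(round(len(norms) * fraction)))
    top = np.argsort(norms)[::-1][:k]         # indices of the k largest norms
    return np.sort(top)
```

In a real SFT setting, `w` would be the LLM's parameters after a short preliminary training phase and the per-example gradients would come from backpropagation over each sample; the statistics of the resulting norms would then drive subset selection.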