GRASS: Gradient-based Adaptive Layer-wise Importance Sampling for Memory-efficient Large Language Model Fine-tuning

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high GPU memory cost of full-parameter fine-tuning in large language models, as well as the performance and adaptability limitations of existing low-rank and static layer-wise approaches. The authors propose a gradient-based adaptive layer-wise importance sampling framework that introduces, for the first time, a dynamic layer-importance evaluation mechanism sensitive to both task characteristics and training stage. The mechanism quantifies each layer's contribution in real time using the norm of its mean gradient, and it combines adaptive sampling with an optimizer-state offloading strategy that overlaps computation and communication to further improve memory efficiency. Experiments across multiple models and benchmarks show up to a 4.38-point improvement in average accuracy and up to a 19.97% reduction in memory consumption, significantly outperforming current state-of-the-art techniques.
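The summary describes scoring each layer by the norm of its mean gradient and turning those scores into sampling probabilities. The sketch below illustrates that idea in plain Python; the function names, the `temperature` knob, and the sampling-without-replacement loop are illustrative assumptions, not the paper's actual algorithm or API.

```python
import math
import random

def layer_sampling_probs(layer_grads, temperature=1.0):
    """Score each layer by the L2 norm of its (already mini-batch-averaged)
    gradient vector, then normalize the scores into sampling probabilities.
    `temperature` is an illustrative smoothing knob, not from the paper."""
    scores = [math.sqrt(sum(x * x for x in g)) ** (1.0 / temperature)
              for g in layer_grads]
    total = sum(scores)
    return [s / total for s in scores]

def sample_layers(probs, k, rng=random):
    """Sample k distinct layer indices to update this step,
    weighted by the adaptive probabilities."""
    idx = list(range(len(probs)))
    chosen = []
    for _ in range(k):
        i = rng.choices(idx, weights=[probs[j] for j in idx])[0]
        chosen.append(i)
        idx.remove(i)
    return chosen
```

In a training loop, the probabilities would be refreshed periodically from fresh gradients, so the distribution tracks both the task and the current training stage.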
📝 Abstract
Full-parameter fine-tuning of large language models is constrained by substantial GPU memory requirements. Low-rank adaptation methods mitigate this challenge by updating only a subset of parameters. However, these approaches often limit model expressiveness and yield lower performance than full-parameter fine-tuning. Layer-wise fine-tuning methods have emerged as an alternative, enabling memory-efficient training through static layer importance sampling strategies. However, these methods overlook variations in layer importance across tasks and training stages, resulting in suboptimal performance on downstream tasks. To address these limitations, we propose GRASS, a gradient-based adaptive layer-wise importance sampling framework. GRASS utilizes mean gradient norms as a task-aware and training-stage-aware metric for estimating layer importance. Furthermore, GRASS adaptively adjusts layer sampling probabilities through an adaptive training strategy. We also introduce a layer-wise optimizer state offloading mechanism that overlaps computation and communication to further reduce memory usage while maintaining comparable training throughput. Extensive experiments across multiple models and benchmarks demonstrate that GRASS consistently outperforms state-of-the-art methods, achieving an average accuracy improvement of up to 4.38 points and reducing memory usage by up to 19.97%.
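The abstract's layer-wise offloading mechanism hides the cost of moving optimizer state off the accelerator by overlapping the copy with the next layer's compute. A minimal scheduling sketch, with `compute` and `offload` as stand-in callables for GPU compute and device-to-host transfers (the real mechanism presumably uses CUDA streams and pinned memory; everything here is an illustrative assumption):

```python
from concurrent.futures import ThreadPoolExecutor

def train_steps_with_offload(layers, compute, offload):
    """While layer i's compute runs, offload the previous layer's
    optimizer state in the background, then synchronize before
    that state could be touched again."""
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = None  # layer whose state still needs offloading
        for layer in layers:
            fut = io.submit(offload, pending) if pending is not None else None
            compute(layer)        # overlaps with the async offload
            if fut is not None:
                fut.result()      # ensure the copy finished
            pending = layer
        if pending is not None:
            io.submit(offload, pending).result()  # drain the last layer
```

The design point is double-buffering: at any moment at most one layer's state is in flight, so memory stays bounded while the transfer latency is hidden behind compute.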
Problem

Research questions and friction points this paper is trying to address.

large language model
memory-efficient fine-tuning
layer-wise importance
adaptive sampling
gradient-based optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

gradient-based importance sampling
adaptive layer-wise fine-tuning
memory-efficient LLM training
optimizer state offloading
task-aware sampling
Kaiyuan Tian
National University of Defense Technology
Yu Tang
Information Support Force Engineering University
Gongqingjian Jiang
National University of Defense Technology
Baihui Liu
National University of Defense Technology
Yifu Gao
National University of Defense Technology
Xialin Su
National University of Defense Technology
Linbo Qiao
NUDT
Stochastic Optimization, Distributed Optimization, Large-scale Machine Learning
Dongsheng Li
Professor, School of Computer Science, National University of Defense Technology
Distributed Computing, Parallel Computing, Cloud Computing, Peer-to-Peer Computing, Big Data