🤖 AI Summary
This work addresses the high GPU memory cost of full-parameter fine-tuning in large language models and the performance and adaptability limitations of existing low-rank and static layer-wise approaches. The authors propose a gradient-based adaptive layer-wise importance sampling framework that introduces, for the first time, a dynamic layer importance evaluation mechanism sensitive to both task characteristics and training stage. The mechanism quantifies each layer's contribution in real time via mean gradient norms and pairs adaptive sampling with an optimizer state offloading strategy that overlaps computation and communication to improve memory efficiency. Experiments show the method improves average accuracy by up to 4.38 percentage points and reduces memory consumption by up to 19.97% across multiple models and benchmarks, significantly outperforming current state-of-the-art techniques.
📝 Abstract
Full-parameter fine-tuning of large language models is constrained by substantial GPU memory requirements. Low-rank adaptation methods mitigate this cost by updating only a subset of parameters, but they often limit model expressiveness and underperform full-parameter fine-tuning. Layer-wise fine-tuning methods have emerged as an alternative, enabling memory-efficient training through static layer importance sampling strategies; however, these methods overlook variations in layer importance across tasks and training stages, resulting in suboptimal performance on downstream tasks. To address these limitations, we propose GRASS, a gradient-based adaptive layer-wise importance sampling framework. GRASS uses mean gradient norms as a task-aware and training-stage-aware metric for estimating layer importance, and adaptively adjusts layer sampling probabilities over the course of training. We also introduce a layer-wise optimizer state offloading mechanism that overlaps computation and communication, further reducing memory usage while maintaining comparable training throughput. Extensive experiments across multiple models and benchmarks demonstrate that GRASS consistently outperforms state-of-the-art methods, improving average accuracy by up to 4.38 points and reducing memory usage by up to 19.97%.
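To make the core idea concrete, the following is a minimal sketch of gradient-norm-driven layer sampling. The abstract does not specify GRASS's exact weighting formula, so the softmax normalization, the `temperature` parameter, and the helper names here are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def layer_sampling_probs(mean_grad_norms, temperature=1.0):
    """Turn per-layer mean gradient norms into sampling probabilities.

    Assumption: a softmax over (norm / temperature) is used here as one
    plausible normalization; the paper's actual formula may differ.
    """
    logits = [g / temperature for g in mean_grad_norms]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # subtract max for numerical stability
    z = sum(exps)
    return [e / z for e in exps]

def sample_layers(probs, k, rng=random):
    """Sample k distinct layer indices, weighted by importance."""
    indices = list(range(len(probs)))
    weights = probs[:]
    chosen = []
    for _ in range(k):
        idx = rng.choices(indices, weights=weights, k=1)[0]
        pos = indices.index(idx)
        indices.pop(pos)
        weights.pop(pos)
        chosen.append(idx)
    return chosen

# Hypothetical per-layer mean gradient norms for a 6-layer model at one step.
norms = [0.8, 0.3, 1.5, 0.2, 1.1, 0.6]
probs = layer_sampling_probs(norms)
active = sample_layers(probs, k=2)  # layers to unfreeze (update) this step
```

Because the norms are recomputed as training proceeds, the distribution shifts with both the task and the training stage, which is the adaptivity that static layer-wise sampling lacks.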