Enhancing Large Language Model Performance with Gradient-Based Parameter Selection

📅 2024-06-21
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address parameter redundancy and inefficiency in large language model (LLM) fine-tuning, this paper proposes a task-aware dynamic sparse fine-tuning method based on gradient magnitude. The core idea is to use the absolute value of each parameter's gradient as a task-specific importance metric and to adaptively generate and update gradient-derived parameter masks during training, departing from conventional fixed-subset selection. The method integrates into standard supervised fine-tuning (SFT) without architectural modifications. Experiments show that it consistently outperforms full-parameter fine-tuning and parameter-efficient methods such as LoRA and Adapter across multiple tasks, converging faster and reaching better final performance. Its computational overhead remains comparable to standard SFT, and it is robust to varying mask sparsity ratios.

📝 Abstract
Large language models (LLMs) have revolutionized many fields of research. Although it is well known that fine-tuning is essential for enhancing the capabilities of LLMs, existing research suggests that there is potential redundancy in the fine-tuning process and therefore proposes to update only a subset of parameters. However, these methods fail to leverage task-specific information to identify important parameters during training. Based on the insight that gradients inherently contain information about task-specific data, we propose Gradient-Mask Tuning (GMT), a method that selectively updates parameters during training based on their gradient information. Specifically, we compute the absolute values of the gradients and mask out those with relatively smaller magnitudes. Our empirical results across various tasks demonstrate that GMT not only outperforms traditional fine-tuning methods but also elevates the upper limits of LLM performance. Further analysis indicates that GMT is insensitive to the mask ratio and has computational efficiency comparable to vanilla SFT.
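The masking step described in the abstract (compute absolute gradient values, mask the smaller magnitudes, update only the rest) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `gradient_mask_step`, the quantile-based threshold, and the toy arrays are assumptions; the paper applies the idea inside standard SFT on LLM weights at scale.

```python
import numpy as np

def gradient_mask_step(params, grads, lr=0.1, mask_ratio=0.5):
    """One illustrative Gradient-Mask Tuning (GMT) update step.

    Keeps only the fraction (1 - mask_ratio) of entries whose gradients
    have the largest absolute values; masked entries receive no update
    this step. (Hypothetical sketch of the idea, not the paper's code.)
    """
    flat = np.abs(grads).ravel()
    # Threshold at the mask_ratio quantile of |grad|: entries whose
    # gradient magnitude falls below it are masked (zeroed) this step.
    threshold = np.quantile(flat, mask_ratio)
    mask = (np.abs(grads) >= threshold).astype(grads.dtype)
    return params - lr * grads * mask

# Toy example: a 2x2 "weight matrix" with gradients of mixed magnitude.
# Only the two large-magnitude gradients (right column) pass the mask.
params = np.zeros((2, 2))
grads = np.array([[0.01, 1.0],
                  [0.02, 2.0]])
updated = gradient_mask_step(params, grads, lr=0.1, mask_ratio=0.5)
```

Because the mask is recomputed from the current gradients at every step, the selected subset adapts to the task as training progresses, which is the contrast the abstract draws with fixed-subset methods.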
Problem

Research questions and friction points this paper is trying to address.

How to enhance LLM performance through gradient-based parameter selection
How to reduce redundancy in the fine-tuning process by exploiting task-specific gradient information
How to update parameters efficiently and effectively during training (addressed via Gradient-Mask Tuning)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based parameter selection using absolute gradient magnitudes
Task-specific masking of low-magnitude gradients during training
Computational efficiency comparable to vanilla SFT
Haoling Li
Tsinghua University, MSRA
Xin Zhang
Microsoft Research
Xiao Liu
Microsoft Research
Yeyun Gong
Microsoft Research Asia
Natural Language Generation · Question Answering · Pre-training
Yifan Wang
Tsinghua University
Qi Chen
Microsoft Research
Peng Cheng
Microsoft Research