Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models

๐Ÿ“… 2024-12-18
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses efficient fine-tuning of large language models (LLMs). We propose Significance-Aware Sparse Parameter-Efficient Fine-Tuning (SPEFT), a novel paradigm distinct from mainstream low-rank or adapter-based approaches. SPEFT dynamically estimates parameter significance using gradient-based zero-cost metrics and employs a static binary mask to identify and lock the most significant sparse parameter subset for optimization. To our knowledge, this is the first systematic validation of zero-cost neural architecture searchโ€“inspired significance measures in sparse PEFT, revealing that simple gradient magnitude outperforms more complex proxies. Crucially, SPEFT achieves stable performance superior to LoRA and other state-of-the-art PEFT methods while significantly reducing computational and memory overhead. Extensive experiments across diverse NLP tasks confirm its effectiveness and robustness. The implementation is publicly released, establishing a new benchmark for efficient, reproducible, and lightweight LLM fine-tuning.
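The salience step the summary describes (score every weight by a gradient-based zero-cost metric, then keep a fixed top-k subset) can be sketched as follows. This is a minimal illustration assuming a PyTorch model; the function names and the MSE calibration loss are placeholders, not the paper's released API.

```python
import torch
import torch.nn as nn

def gradient_salience(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> dict:
    """Score each parameter entry by |gradient| on one calibration batch
    (the simple gradient-magnitude proxy the summary highlights)."""
    model.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    return {
        name: p.grad.abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }

def topk_mask(salience: torch.Tensor, density: float) -> torch.Tensor:
    """Static binary mask keeping the `density` fraction of most salient entries."""
    k = max(1, int(density * salience.numel()))
    threshold = salience.flatten().topk(k).values.min()
    return (salience >= threshold).float()

torch.manual_seed(0)
model = nn.Linear(8, 4)
x, y = torch.randn(16, 8), torch.randn(16, 4)
scores = gradient_salience(model, x, y)
mask = topk_mask(scores["weight"], density=0.1)  # 10% of the 4x8 weight matrix
```

Once computed, the mask stays fixed for the whole fine-tuning run, which is what makes the static variant cheap: salience is estimated once, before training starts.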

๐Ÿ“ Abstract
Parameter-Efficient Fine-Tuning (PEFT) has gained prominence through low-rank adaptation methods like LoRA. In this paper, we focus on sparsity-based PEFT (SPEFT), which introduces trainable sparse adaptations to the weight matrices in the model, offering greater flexibility in selecting fine-tuned parameters compared to low-rank methods. We conduct the first systematic evaluation of salience metrics for SPEFT, inspired by zero-cost NAS proxies, and find that a simple gradient-based metric is reliable, with results on par with the best alternatives, offering both computational efficiency and robust performance. Additionally, we compare static and dynamic masking strategies, finding that static masking, which predetermines non-zero entries before training, delivers efficiency without sacrificing performance, while dynamic masking offers no substantial benefits. Across NLP tasks, a simple gradient-based, static SPEFT consistently outperforms other fine-tuning methods for LLMs, providing a simple yet effective baseline for SPEFT. Our work challenges the notion that complexity is necessary for effective PEFT. The code is open source and available to the community at [https://github.com/0-ml/speft].
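The static masking strategy the abstract favors can be illustrated with a small PyTorch sketch: a binary mask is fixed before training and zeroes out gradients for all non-selected entries, so only the sparse subset is ever updated. This is an illustrative example, not the authors' released implementation; here the mask is hand-picked rather than derived from a salience metric, and only the weight matrix (not the bias) is masked.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(8, 4)
layer.bias.requires_grad_(False)  # keep the example focused on the weight matrix

# Static binary mask, fixed before training (stand-in for a salience-derived mask).
mask = torch.zeros_like(layer.weight)
mask[0, :3] = 1.0  # only 3 of 32 entries are trainable

# The hook multiplies every incoming gradient by the static mask,
# so masked-out entries receive zero gradient on each step.
layer.weight.register_hook(lambda grad: grad * mask)

weight_before = layer.weight.detach().clone()
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for _ in range(5):
    opt.zero_grad()
    loss = layer(torch.randn(16, 8)).pow(2).mean()
    loss.backward()
    opt.step()

# Entries outside the mask never change; only the sparse subset moves.
delta = layer.weight.detach() - weight_before
unchanged_max = delta[mask == 0].abs().max().item()
print(unchanged_max)  # 0.0
```

Because the mask never changes, no per-step selection logic is needed, which matches the paper's finding that dynamic masking buys little over this simpler static scheme.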
Problem

Research questions and friction points this paper is trying to address.

Evaluating salience metrics for sparsity-based PEFT in language models
Comparing static vs dynamic masking strategies for efficient fine-tuning
Proposing gradient-based static SPEFT as simple effective baseline
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsity-based PEFT with trainable adaptations
Gradient-based salience metrics for efficient tuning
Static masking strategy for robust performance
๐Ÿ”Ž Similar Papers
No similar papers found.
Xinxin Liu
Southern University of Science and Technology, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
Aaron Thomas
University of Birmingham, UK
Cheng Zhang
Imperial College London, UK
Jianyi Cheng
University of Edinburgh
high-level synthesis, computer architecture, formal methods, machine learning, hardware security
Yiren Zhao
University of Toronto
Computer Networks, Optical Networks, Datacenter Networks
Xitong Gao
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Efficient Training and Inference, AI Security and Privacy