Leave it to the Specialist: Repair Sparse LLMs with Sparse Fine-Tuning via Sparsity Evolution

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language model (LLM) pruning at high sparsity rates leads to severe performance degradation, while existing fine-tuning methods, such as LoRA or full fine-tuning, disrupt the learned sparse structure and undermine the benefits of sparsity. Method: We propose Sparsity Evolution Fine-Tuning (SEFT), a novel framework that dynamically optimizes sparse connectivity topologies during fine-tuning. SEFT maintains global sparsity via sensitivity-driven pruning and a weight drop-and-grow strategy, enabling task-specific adaptation without corrupting the sparse structure. Contribution/Results: SEFT introduces a dynamic sparse-topology evolution mechanism that eases the trade-off between sparsity preservation and performance recovery. Extensive evaluation on LLaMA, DeepSeek, and Mistral architectures demonstrates that SEFT consistently outperforms SparseGPT, Wanda, and LoRA across multiple benchmarks, achieving superior accuracy, a reduced GPU memory footprint, and faster training convergence. It enables efficient repair and deployment of mainstream sparse LLMs.
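To make the drop-and-grow idea concrete, here is a minimal PyTorch sketch of one topology update. It uses magnitude-based dropping and gradient-based growing (RigL-style criteria) as stand-ins for SEFT's actual rules; the function and tensor names are illustrative assumptions, not code from the SEFT repository, and tensors are assumed contiguous.

```python
# Hypothetical drop-and-grow update (RigL-style scoring as a stand-in;
# SEFT's exact criteria may differ). All names here are illustrative.
import torch

def drop_and_grow(weight: torch.Tensor, grad: torch.Tensor,
                  mask: torch.Tensor, update_frac: float = 0.1) -> torch.Tensor:
    """Swap a fraction of active connections while keeping sparsity fixed."""
    n_active = int(mask.sum().item())
    k = int(update_frac * n_active)  # number of connections to replace
    if k == 0:
        return mask

    # Drop: deactivate the k active weights with the smallest magnitude.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.view(-1), k, largest=False).indices
    mask.view(-1)[drop_idx] = 0.0

    # Grow: activate the k inactive positions with the largest gradient
    # magnitude, so new connections go where the task loss pulls hardest.
    inactive_scores = torch.where(mask.bool(),
                                  torch.full_like(grad, float("-inf")),
                                  grad.abs())
    grow_idx = torch.topk(inactive_scores.view(-1), k, largest=True).indices
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0  # newly grown weights start at zero
    return mask
```

Keeping the drop and grow counts equal is what preserves the global sparsity budget across updates.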

📝 Abstract
Large language models (LLMs) have achieved remarkable success across various tasks but face deployment challenges due to their massive computational demands. While post-training pruning methods like SparseGPT and Wanda can effectively reduce model size, they struggle to maintain model performance at high sparsity levels, limiting their utility for downstream tasks. Existing fine-tuning methods, such as full fine-tuning and LoRA, fail to preserve sparsity, as they require updating dense weight matrices and are therefore not well suited to sparse LLMs. In this paper, we propose Sparsity Evolution Fine-Tuning (SEFT), a novel method designed specifically for sparse LLMs. SEFT dynamically evolves the sparse topology of pruned models during fine-tuning while preserving the overall sparsity throughout the process. The strength of SEFT lies in its ability to perform task-specific adaptation through a weight drop-and-grow strategy, enabling the pruned model to self-adapt its sparse connectivity pattern to the target dataset. Furthermore, a sensitivity-driven pruning criterion ensures that the desired sparsity level is consistently maintained throughout fine-tuning. Our experiments on various LLMs, including the LLaMA family, DeepSeek, and Mistral, across a diverse set of benchmarks demonstrate that SEFT achieves stronger performance while offering superior memory and time efficiency compared to existing baselines. Our code is publicly available at: https://github.com/QiaoXiao7282/SEFT.
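As a companion sketch for the sensitivity-driven pruning criterion mentioned in the abstract, the helper below restores a target sparsity level by keeping only the most sensitive active weights, scored with the common first-order saliency |w * g| (as in SNIP). This scoring rule is an assumption for illustration; SEFT's actual criterion may differ, and all names are hypothetical.

```python
# Hypothetical sensitivity-driven pruning step; |w * g| saliency is a
# common stand-in (SNIP-style), not necessarily SEFT's exact criterion.
import torch

def prune_to_sparsity(weight: torch.Tensor, grad: torch.Tensor,
                      mask: torch.Tensor, target_sparsity: float) -> torch.Tensor:
    """Zero out the least sensitive active weights until the layer reaches
    `target_sparsity` (fraction of weights that are zero)."""
    n_keep = int((1.0 - target_sparsity) * weight.numel())
    n_keep = min(n_keep, int(mask.sum().item()))  # never grow here, only prune
    # First-order estimate of the loss change if a weight is removed.
    saliency = torch.where(mask.bool(), (weight * grad).abs(),
                           torch.full_like(weight, float("-inf")))
    keep_idx = torch.topk(saliency.view(-1), n_keep, largest=True).indices
    new_mask = torch.zeros_like(mask)
    new_mask.view(-1)[keep_idx] = 1.0
    weight.data.mul_(new_mask)  # enforce the sparse pattern in place
    return new_mask
```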
Problem

Research questions and friction points this paper is trying to address.

Maintain performance in highly sparse LLMs
Preserve sparsity during fine-tuning of pruned models
Adapt sparse connectivity for target tasks efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sparse topology evolution during fine-tuning
Weight drop-and-grow strategy for adaptation
Sensitivity-driven pruning maintains the target sparsity level (see the composed loop sketched after this list)
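A hedged sketch of how the three innovations above could compose into a fine-tuning loop: train with the masks enforced, and periodically evolve the topology using the two helpers sketched earlier. `model`, `loader`, `masks`, and `update_every` are illustrative names under those assumptions, not SEFT's API.

```python
# Illustrative composition of masked training with periodic topology
# evolution; assumes drop_and_grow and prune_to_sparsity from the sketches
# above and a {param_name: mask} dict prepared at pruning time.
import torch
import torch.nn.functional as F

def sparse_finetune(model, loader, optimizer, masks, target_sparsity,
                    update_every=100):
    for step, (inputs, labels) in enumerate(loader):
        loss = F.cross_entropy(model(inputs), labels)  # illustrative loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        with torch.no_grad():
            # Keep pruned weights at zero between topology updates.
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
            # Periodically evolve the sparse topology, then re-prune so the
            # global sparsity budget stays fixed.
            if step % update_every == 0:
                for name, p in model.named_parameters():
                    if name in masks:
                        masks[name] = drop_and_grow(p, p.grad, masks[name])
                        masks[name] = prune_to_sparsity(p, p.grad, masks[name],
                                                        target_sparsity)
```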
👥 Authors
Qiao Xiao
Eindhoven University of Technology
Deep Learning · AI Efficiency · Sparse Neural Networks
Alan Ansell
University of Cambridge
Boqian Wu
University of Twente
Machine Learning · Sparse Neural Networks · Computer Vision
Lu Yin
University of Surrey, Eindhoven University of Technology
Mykola Pechenizkiy
Eindhoven University of Technology
Data Mining · Predictive Analytics · Fairness · Transparency · Accountability
Shiwei Liu
University of Oxford, Eindhoven University of Technology
D. Mocanu
University of Luxembourg, Eindhoven University of Technology