🤖 AI Summary
To address the inefficiencies of diffusion-model fine-tuning (underutilization of redundant parameters, susceptibility to overfitting, and high memory overhead), this paper proposes Sparse Low-Rank Adaptation (SaRA). Through a parameter-importance analysis, SaRA identifies the 10–20% of pretrained weights with the smallest magnitudes, keeps the remaining effective weights frozen, and repurposes these ineffective weights for task-specific updates. It introduces a nuclear-norm-regularized sparse low-rank training scheme, coupled with progressive parameter adjustment and unstructured backpropagation, enabling lightweight acquisition of task-specific knowledge. Evaluated on Stable Diffusion variants, SaRA matches or surpasses baselines such as LoRA in image generation quality and generalization while reducing GPU memory consumption by over 40%. Moreover, SaRA requires only a single line of code to integrate and is fully compatible with existing fine-tuning pipelines.
📝 Abstract
In recent years, the development of diffusion models has driven significant progress in image and video generation, with pre-trained models such as the Stable Diffusion series playing a crucial role. Inspired by model pruning, which lightens large pre-trained models by removing unimportant parameters, we propose a novel fine-tuning method that makes full use of these ineffective parameters and endows the pre-trained model with new task-specific capabilities. We first investigate the importance of parameters in pre-trained diffusion models and discover that the smallest 10% to 20% of parameters by absolute value do not contribute to the generation process. Based on this observation, we propose SaRA, a method that re-utilizes these temporarily ineffective parameters, which amounts to optimizing a sparse weight matrix to learn task-specific knowledge. To mitigate overfitting, we propose a nuclear-norm-based low-rank sparse training scheme for efficient fine-tuning. We further design a progressive parameter adjustment strategy to make full use of the re-trained/fine-tuned parameters. Finally, we propose a novel unstructured backpropagation strategy that significantly reduces memory costs during fine-tuning. Our method enhances the generative capabilities of pre-trained models in downstream applications and outperforms traditional fine-tuning methods such as LoRA at preserving the model's generalization ability. We validate our approach through fine-tuning experiments on Stable Diffusion models, demonstrating significant improvements. SaRA also offers a practical advantage: it requires only a single line of code modification for efficient implementation and is seamlessly compatible with existing methods.
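The core mechanism described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the layer, mask fraction, loss, and regularization weight are illustrative assumptions. It shows the two key ingredients: a boolean mask selecting the smallest-magnitude 10% of a pretrained weight matrix, and a trainable sparse update on those entries regularized by a nuclear-norm penalty to keep the learned update low-rank.

```python
# Hypothetical sketch of SaRA's core idea (not the paper's code):
# train only the smallest-magnitude entries of a pretrained weight,
# with a nuclear-norm penalty encouraging a low-rank sparse update.
import torch

def sparse_mask(weight: torch.Tensor, fraction: float = 0.1) -> torch.Tensor:
    """Boolean mask marking the `fraction` smallest-magnitude entries."""
    k = max(1, int(weight.numel() * fraction))
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() <= threshold

torch.manual_seed(0)
W = torch.randn(64, 64)              # stands in for a frozen pretrained layer weight
mask = sparse_mask(W, fraction=0.1)  # ~10% smallest-magnitude entries
delta = torch.zeros_like(W, requires_grad=True)  # sparse trainable update

opt = torch.optim.Adam([delta], lr=1e-3)
x = torch.randn(8, 64)               # toy inputs/targets for illustration
target = torch.randn(8, 64)

for _ in range(5):
    W_eff = W + delta * mask         # gradients reach only the masked entries
    task_loss = ((x @ W_eff.T - target) ** 2).mean()
    # nuclear norm of the sparse update acts as the low-rank regularizer
    reg = torch.linalg.matrix_norm(delta * mask, ord="nuc")
    loss = task_loss + 1e-4 * reg
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the update is multiplied by the mask inside the forward pass, entries outside the mask receive zero gradient and never move, so the frozen, effective pretrained weights are left untouched.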