AI Summary
This work addresses the instability in feature learning induced by non-zero initialization in Low-Rank Adaptation (LoRA), which can degrade fine-tuning performance. Through theoretical analysis, the authors demonstrate that LoRA's stability critically depends on specific hyperparameters and on zero initialization. To reconcile the benefits of non-zero initialization with training stability, they propose Stable-LoRA, a method that dynamically shrinks the low-rank matrix $A$ during early training to restore and enhance feature learning stability. By combining low-rank decomposition with a dynamic weight-shrinkage strategy, Stable-LoRA incurs negligible computational overhead and no additional memory cost. Extensive experiments across diverse models and tasks consistently show that Stable-LoRA outperforms existing baselines, significantly improving fine-tuning effectiveness.
Abstract
Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient method for fine-tuning Large Language Models. It updates the weight matrix as $W=W_0+sBA$, where $W_0$ is the original frozen weight, $s$ is a scaling factor, and $A$ and $B$ are trainable low-rank matrices. Despite its robust empirical effectiveness, the theoretical foundations of LoRA remain insufficiently understood, particularly with respect to feature learning stability. In this paper, we first establish that LoRA can, in principle, naturally achieve and sustain stable feature learning (i.e., be self-stabilized) under appropriate hyperparameters and initializations of $A$ and $B$. However, we also uncover a fundamental limitation: the necessary non-zero initialization of $A$ compromises self-stability, leading to suboptimal performance. To address this challenge, we propose Stable-LoRA, a weight-shrinkage optimization strategy that dynamically enhances the stability of LoRA feature learning. By progressively shrinking $A$ during the earliest training steps, Stable-LoRA is both theoretically and empirically validated to effectively eliminate the instability of LoRA feature learning while preserving the benefits of the non-zero start. Experiments show that Stable-LoRA consistently outperforms other baselines across diverse models and tasks, with no additional memory usage and only negligible computational overhead. The code is available at https://github.com/Yize-Wu/Stable-LoRA.
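To make the setup concrete, the sketch below implements the LoRA parameterization $W = W_0 + sBA$ together with an early-step shrinkage of $A$. This is a minimal illustration, not the authors' implementation: the standard LoRA initialization (Gaussian $A$, zero $B$) and the `shrink_A` schedule shown here are assumptions for exposition; the actual shrinkage rule is defined in the paper.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA layer: y = x (W0 + s * B A)^T, with W0 frozen."""

    def __init__(self, W0, rank=4, alpha=8, seed=0):
        d_out, d_in = W0.shape
        rng = np.random.default_rng(seed)
        self.W0 = W0                      # frozen pretrained weight
        self.s = alpha / rank             # scaling factor s
        # Standard LoRA init: non-zero Gaussian A, zero B,
        # so the update s * B @ A is zero at step 0.
        self.A = rng.normal(0.0, 0.02, size=(rank, d_in))
        self.B = np.zeros((d_out, rank))

    def forward(self, x):
        return x @ (self.W0 + self.s * self.B @ self.A).T

    def shrink_A(self, gamma):
        # Hypothetical early-step shrinkage: scale A down by a factor
        # gamma < 1 during the first few training steps to damp the
        # instability introduced by A's non-zero initialization.
        self.A *= gamma
```

A training loop would call `shrink_A` only during the earliest steps (e.g. after each of the first few optimizer updates), leaving later training untouched; since the operation is an in-place scalar multiply of $A$, it adds no memory and negligible compute, matching the overhead claims in the abstract.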