SRLoRA: Subspace Recomposition in Low-Rank Adaptation via Importance-Based Fusion and Reinitialization

📅 2025-05-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
LoRA's fixed low-rank subspace limits representational capacity, hindering downstream task performance. To address this, we propose SRLoRA, a parameter-efficient fine-tuning method that dynamically refreshes the low-rank subspace during training without increasing the parameter count. Specifically, SRLoRA fuses less important LoRA weight pairs into the frozen backbone in an importance-driven manner and reinitializes them along dominant singular directions derived from the pretrained weight's SVD. This enables adaptive subspace evolution while preserving computational and memory efficiency. Crucially, it introduces no additional trainable parameters and is compatible with diverse PEFT scenarios. Extensive experiments on the GLUE benchmark and multiple image classification tasks demonstrate that SRLoRA consistently outperforms standard LoRA: it accelerates convergence by 15–30% and improves average accuracy by 1.2–2.8 percentage points. These results validate its effectiveness, generality across modalities, and training efficiency.

πŸ“ Abstract
Low-Rank Adaptation (LoRA) is a widely adopted parameter-efficient fine-tuning (PEFT) method that injects two trainable low-rank matrices (A and B) into frozen pretrained models. While efficient, LoRA constrains updates to a fixed low-rank subspace (ΔW = BA), which can limit representational capacity and hinder downstream performance. We introduce Subspace Recomposition in Low-Rank Adaptation (SRLoRA) via importance-based fusion and reinitialization, a novel approach that enhances LoRA's expressiveness without compromising its lightweight structure. SRLoRA assigns importance scores to each LoRA pair (a column of B and the corresponding row of A), and dynamically recomposes the subspace during training. Less important pairs are fused into the frozen backbone, freeing capacity to reinitialize new pairs along unused principal directions derived from the pretrained weight's singular value decomposition. This mechanism enables continual subspace refreshment and richer adaptation over time, without increasing the number of trainable parameters. We evaluate SRLoRA on both language and vision tasks, including the GLUE benchmark and various image classification datasets. SRLoRA consistently achieves faster convergence and improved accuracy over standard LoRA, demonstrating its generality, efficiency, and potential for broader PEFT applications.
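The fusion-and-reinitialization step described in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the norm-based importance score, the choice to recycle the two lowest-scoring pairs, and the zero-row reinitialization (which keeps the effective weight unchanged at the moment of recomposition) are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
B = rng.standard_normal((d_out, r)) * 0.1   # LoRA up-projection
A = rng.standard_normal((r, d_in)) * 0.1    # LoRA down-projection
W0, B0, A0 = W.copy(), B.copy(), A.copy()   # snapshot for sanity checking

# 1) Score each rank-1 pair (column i of B, row i of A).
#    A simple norm-product proxy; the paper's exact importance metric may differ.
scores = np.array([np.linalg.norm(B[:, i]) * np.linalg.norm(A[i]) for i in range(r)])
k = 2                                       # pairs to recycle (assumed)
fuse_idx = np.argsort(scores)[:k]

# 2) Fuse the least-important pairs into the frozen backbone so their
#    contribution to the effective weight is preserved.
for i in fuse_idx:
    W += np.outer(B[:, i], A[i])

# 3) Reinitialize the freed pairs along principal directions of the weight's SVD.
#    Setting the A-row to zero means the new pair starts with a zero update,
#    so the effective weight W + B @ A is unchanged by the recomposition.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
for j, i in enumerate(fuse_idx):
    B[:, i] = U[:, j]                       # new up-direction
    A[i] = 0.0                              # zero contribution at restart

# The effective weight is exactly preserved across the recomposition step:
print(np.allclose(W + B @ A, W0 + B0 @ A0))
```

In actual training the recycled pairs would then resume gradient updates, letting the adapter explore directions outside its original fixed subspace.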
Problem

Research questions and friction points this paper is trying to address.

Standard LoRA confines updates to a fixed low-rank subspace, limiting expressiveness
How to refresh that subspace during training without adding trainable parameters
How to improve convergence speed and accuracy in PEFT tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Importance-based fusion of less important LoRA pairs into the frozen backbone
Reinitialization of freed pairs along unused principal directions of the pretrained weight's SVD
Continual subspace recomposition with no extra trainable parameters