SingLoRA: Low Rank Adaptation Using a Single Matrix

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In LoRA fine-tuning, a scale mismatch between the two low-rank matrices causes training instability. This paper proposes SingLoRA, a method that replaces the dual low-rank decomposition with a single low-rank matrix multiplied by its transpose to parameterize weight updates, thereby eliminating the scale conflict at its source, improving training stability, and reducing trainable parameters by roughly 40%. Grounded in low-rank matrix decomposition and Neural Tangent Kernel (NTK) theory, SingLoRA guarantees stable feature learning under the infinite-width network assumption. Extensive experiments on LLaMA and Stable Diffusion demonstrate its effectiveness: on MNLI it achieves 91.3% accuracy using only 60% of LoRA's parameters, outperforming both LoRA and LoRA+; on DreamBooth it significantly improves generation quality, attaining a DINO similarity score of 0.151. SingLoRA introduces the first *single-matrix transpose-product (A·Aᵀ)* parameterization, offering a novel theoretical foundation and practical framework for parameter-efficient fine-tuning.
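The core idea can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes a square pretrained weight for simplicity (the paper also handles rectangular weights), and the names `singlora_delta`, `W0`, `A`, and `alpha` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def singlora_delta(A, alpha=1.0):
    """SingLoRA-style weight update: a single matrix times its transpose.

    A: the one trainable low-rank matrix, shape (d, r).
    Returns alpha * A @ A.T, a symmetric update of rank at most r.
    (Standard LoRA would instead learn two matrices B and A and use B @ A,
    which can develop a scale mismatch between B and A during training.)
    """
    return alpha * A @ A.T

rng = np.random.default_rng(0)
d, r = 8, 2
W0 = rng.standard_normal((d, d))        # frozen pretrained weight (square case)
A = 0.01 * rng.standard_normal((d, r))  # small init for the single adapter matrix

delta = singlora_delta(A)
W = W0 + delta                          # effective fine-tuned weight

print(np.linalg.matrix_rank(delta))     # rank is at most r
print(np.allclose(delta, delta.T))      # symmetric by construction
```

Because both factors of the update are the same matrix, there is no second matrix whose scale can drift relative to the first, which is the source of the stability claim.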

📝 Abstract
Low-Rank Adaptation (LoRA) has significantly advanced parameter-efficient fine-tuning of large pretrained models. LoRA augments the pre-trained weights of a model by adding the product of two smaller matrices that together form a low-rank matrix update. Recent research has shown that scale disparities between these two matrices often cause unstable training dynamics, leading to suboptimal performance. In this paper, we propose SingLoRA, which reformulates low-rank adaptation by learning the weight update as a decomposition of a single low-rank matrix multiplied by its transpose. This simple design inherently removes inter-matrix scale conflicts, ensures stable optimization, and roughly halves the parameter count. We analyze SingLoRA within the infinite-width neural network framework, showing that it guarantees stable feature learning by construction. Extensive experiments on multiple tasks validate these benefits. In common sense reasoning, fine-tuning LLaMA 7B on MNLI with SingLoRA achieves 91.3% accuracy - surpassing LoRA (89.1%) and LoRA+ (90.2%) - while using only 60% of their parameter budget. In image generation, fine-tuning Stable Diffusion with SingLoRA significantly improves image fidelity on DreamBooth, achieving a DINO similarity score of 0.151, compared to scores of 0.148 and 0.143 for DoRA and LoRA, respectively.
Problem

Research questions and friction points this paper is trying to address.

Resolves unstable training in LoRA due to scale disparities
Reduces parameter count by half in low-rank adaptation
Improves performance in model fine-tuning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses single low-rank matrix decomposition
Eliminates inter-matrix scale conflicts
Reduces parameter count by half
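The "half the parameters" claim in the list above is simple arithmetic: LoRA learns two factors, SingLoRA learns one. A quick sketch under the assumption of a square weight matrix (the helper names and the dimension 4096 are hypothetical, chosen only for illustration):

```python
def lora_param_count(d, k, r):
    # LoRA learns B (d x r) and A (r x k): r * (d + k) trainable parameters.
    return r * (d + k)

def singlora_param_count(d, r):
    # SingLoRA learns a single A (d x r): d * r trainable parameters.
    return d * r

d = k = 4096  # hypothetical square weight, e.g. an attention projection
r = 8

lora = lora_param_count(d, k, r)
single = singlora_param_count(d, r)
print(lora)    # 65536
print(single)  # 32768 -- exactly half of LoRA's count when d == k
```

For rectangular weights (d ≠ k) the ratio dr / r(d + k) differs from one half, which is consistent with the ~40% reduction reported in some of the paper's experimental settings.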