🤖 AI Summary
To address parameter redundancy in Low-Rank Adaptation (LoRA), this paper proposes SymLoRA, a computationally efficient fine-tuning method that models adapter weights as symmetric low-rank matrices, replacing the conventional BA decomposition with a spectral decomposition Q diag(Λ) Qᵀ. Its core contribution is the introduction of symmetry constraints and spectral parameterization for LoRA, coupled with an SVD-inspired initialization strategy, enabling more concise theoretical modeling and improved optimization stability in end-to-end training. Evaluated across multiple NLP benchmarks, SymLoRA reduces trainable parameters by 48%–52% relative to standard LoRA, significantly lowering GPU memory consumption and computational overhead. Crucially, it retains downstream task performance on par with LoRA, with no discernible accuracy degradation.
📝 Abstract
In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method utilizes low-rank symmetric weight matrices to learn downstream tasks more efficiently. Traditional LoRA accumulates fine-tuning weights onto the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $BA$ (where $B \in \mathbb{R}^{n \times r}$, $A \in \mathbb{R}^{r \times n}$, and $r$ is the rank of the merged weight matrix). In contrast, our approach, named SymLoRA, represents fine-tuning weights as a spectral decomposition, i.e., $Q \, \mathrm{diag}(\Lambda) \, Q^T$, where $Q \in \mathbb{R}^{n \times r}$ and $\Lambda \in \mathbb{R}^r$. SymLoRA requires approximately half of the fine-tuning weights. Here, we show that this approach has negligible losses in downstream efficacy.
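The parameter-count claim can be checked directly from the two parameterizations: LoRA trains $2nr$ weights ($B$ and $A$), while the spectral form trains $nr + r$ ($Q$ and $\Lambda$). A minimal NumPy sketch of both updates (illustrative names and sizes, not the paper's code):

```python
# Sketch comparing the LoRA update BA with the SymLoRA update Q diag(Λ) Qᵀ.
# n and r are example values; real adapters use the model's hidden size.
import numpy as np

n, r = 768, 8  # hidden size and adapter rank (illustrative)

# Standard LoRA: two factors, 2*n*r trainable parameters.
B = np.random.randn(n, r)
A = np.random.randn(r, n)
delta_lora = B @ A                       # n x n update, rank <= r

# SymLoRA: one factor plus r eigenvalues, n*r + r trainable parameters.
Q = np.random.randn(n, r)
lam = np.random.randn(r)
delta_sym = Q @ np.diag(lam) @ Q.T       # symmetric n x n update, rank <= r

assert np.allclose(delta_sym, delta_sym.T)  # symmetry holds by construction
print(2 * n * r, n * r + r)                 # 12288 vs 6152: roughly half the weights
```

Note that the symmetry of $Q \, \mathrm{diag}(\Lambda) \, Q^T$ is structural: the update equals its own transpose regardless of the learned values of $Q$ and $\Lambda$, which is what allows the second factor of LoRA to be dropped entirely.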