Towards Symmetric Low-Rank Adapters

📅 2025-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address parameter redundancy in Low-Rank Adaptation (LoRA), this paper proposes SymLoRA, a computationally efficient fine-tuning method that models adapter weights as symmetric low-rank matrices and replaces the conventional BA decomposition with a spectral decomposition Q diag(Λ) Qᵀ. Its core innovation is the introduction of symmetry constraints and spectral parameterization for LoRA, coupled with an SVD-inspired initialization strategy, enabling more concise theoretical modeling and improved optimization stability in end-to-end training. Evaluated across multiple NLP benchmarks, SymLoRA reduces trainable parameters by 48%–52% compared to standard LoRA, roughly halving the parameter count and thereby lowering GPU memory consumption and computational overhead. Crucially, it retains downstream task performance on par with LoRA, with no discernible accuracy degradation.

📝 Abstract
In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method utilizes Low-Rank Symmetric Weight Matrices to learn downstream tasks more efficiently. Traditional LoRA accumulates fine-tuning weights with the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $BA$ (where $B \in \mathbb{R}^{n \times r}$, $A \in \mathbb{R}^{r \times n}$, and $r$ is the rank of the merged weight matrix). In contrast, our approach, named SymLoRA, represents fine-tuning weights as a Spectral Decomposition, i.e., $Q \, \mathrm{diag}(\Lambda) \, Q^T$, where $Q \in \mathbb{R}^{n \times r}$ and $\Lambda \in \mathbb{R}^r$. SymLoRA requires approximately half of the fine-tuning weights. Here, we show that this approach has negligible losses in downstream efficacy.
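The "approximately half" claim follows directly from counting parameters: LoRA stores $B$ and $A$ (totaling $2nr$ values), while SymLoRA stores only $Q$ and the spectrum $\Lambda$ ($nr + r$ values). A minimal sketch of this counting, using hypothetical dimensions (n = 768, r = 8 are illustrative choices, not values from the paper):

```python
def lora_params(n: int, r: int) -> int:
    # Standard LoRA update BA: B is (n x r), A is (r x n) -> 2nr weights
    return n * r + r * n

def symlora_params(n: int, r: int) -> int:
    # SymLoRA update Q diag(Lambda) Q^T: Q is (n x r), Lambda has r entries
    return n * r + r

n, r = 768, 8  # hypothetical hidden size and adapter rank
print(lora_params(n, r))     # 12288
print(symlora_params(n, r))  # 6152, slightly over half of LoRA's count
```

Since $r \ll n$ in practice, the extra $r$ entries for $\Lambda$ are negligible and the ratio sits just above 0.5, consistent with the 48%–52% reduction reported in the summary.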
Problem

Research questions and friction points this paper is trying to address.

Optimize LoRA with fewer weights using symmetric matrices
Improve efficiency in learning downstream tasks
Reduce finetuning weights by half without significant loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symmetric Low-Rank Adapters reduce weight count
Uses Spectral Decomposition for efficient learning
Halves finetuning weights with minimal efficacy loss
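The innovation above hinges on the spectral parameterization producing a valid low-rank update. A small sketch (using NumPy, with arbitrary random values; not the paper's implementation) showing that $\Delta W = Q\,\mathrm{diag}(\Lambda)\,Q^T$ is symmetric by construction and has rank at most $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 16, 4  # hypothetical layer size and adapter rank

Q = rng.standard_normal((n, r))   # trainable factor, n x r
lam = rng.standard_normal(r)      # trainable spectrum, r values

# SymLoRA-style weight update: Delta W = Q diag(Lambda) Q^T
delta_w = Q @ np.diag(lam) @ Q.T

assert np.allclose(delta_w, delta_w.T)          # symmetric by construction
assert np.linalg.matrix_rank(delta_w) <= r      # rank bounded by r
```

Note the trade-off this illustrates: the update is constrained to be symmetric, which is exactly where the parameter savings come from, and the paper's empirical claim is that this constraint costs little downstream accuracy.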