Nonlinearity as Rank: Generative Low-Rank Adapter with Radial Basis Functions

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Generative Low-Rank Adaptation (GenLoRA), a parameter-efficient fine-tuning method that addresses the parameter redundancy inherent in standard Low-Rank Adaptation (LoRA). Unlike conventional LoRA, which explicitly stores basis vectors and expands model capacity by adding more of them, GenLoRA implicitly generates the basis vectors of the low-rank matrices by combining lightweight nonlinear functions, such as radial basis functions, with latent vectors. This shift eliminates the need to store basis components explicitly and significantly improves parameter efficiency. Empirical results across diverse model architectures and datasets show that GenLoRA achieves a higher effective rank with fewer parameters and consistently outperforms standard LoRA on downstream tasks.
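For context, the explicit-rank baseline the summary contrasts against can be sketched as follows: standard LoRA parameterizes the weight update as the product of two low-rank matrices, so its parameter count grows linearly with the rank (dimensions and names below are illustrative, not from the paper).

```python
import numpy as np

# Standard (explicit-rank) LoRA: the update to a frozen pretrained weight
# W is the product of two trainable low-rank factors B and A.
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
B = np.zeros((d_out, r))                     # trainable, zero-initialized
A = rng.standard_normal((r, d_in)) * 0.01    # trainable

delta_W = B @ A                              # update of rank at most r
W_adapted = W + delta_W

# Explicit-rank parameter cost: r * (d_out + d_in) trainable parameters,
# i.e. each extra unit of rank adds d_out + d_in stored basis entries.
print(B.size + A.size)  # 4 * (64 + 64) = 512
```

Raising capacity here means raising `r`, which is exactly the parameter growth GenLoRA aims to avoid.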

📝 Abstract
Low-rank adaptation (LoRA) approximates the update of a pretrained weight matrix using the product of two low-rank matrices. However, standard LoRA follows an explicit-rank paradigm, where increasing model capacity requires adding more rows or columns (i.e., basis vectors) to the low-rank matrices, leading to substantial parameter growth. In this paper, we find that these basis vectors exhibit significant parameter redundancy and can be compactly represented by lightweight nonlinear functions. Therefore, we propose Generative Low-Rank Adapter (GenLoRA), which replaces explicit basis vector storage with nonlinear basis vector generation. Specifically, GenLoRA maintains a latent vector for each low-rank matrix and employs a set of lightweight radial basis functions (RBFs) to synthesize the basis vectors. Each RBF requires far fewer parameters than an explicit basis vector, enabling higher parameter efficiency in GenLoRA. Extensive experiments across multiple datasets and architectures show that GenLoRA attains higher effective LoRA ranks under smaller parameter budgets, resulting in superior fine-tuning performance. The code is available at https://anonymous.4open.science/r/GenLoRA-1519.
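The abstract's generative idea can be illustrated with a minimal sketch. The exact parameterization below is an assumption (the paper's code is at the linked repository): one shared latent vector per low-rank matrix, and each basis vector synthesized by applying a Gaussian RBF with its own center and width to that latent vector, so each extra basis vector costs only a couple of scalars instead of a full stored row.

```python
import numpy as np

# Hedged sketch of GenLoRA-style basis generation (the RBF form, the
# scalar centers/widths, and all names here are assumptions, not the
# paper's exact formulation): instead of storing an r x d factor of
# explicit basis vectors, keep one latent vector z of length d and
# generate row i as exp(-gamma_i * (z - c_i)^2), applied elementwise.
rng = np.random.default_rng(0)
d, r = 64, 16

z = rng.standard_normal(d)        # shared latent vector (trainable)
centers = rng.standard_normal(r)  # one scalar center per basis vector
gammas = np.full(r, 0.5)          # one width per basis vector

# Generated low-rank factor: shape (r, d), never stored explicitly.
A_gen = np.exp(-gammas[:, None] * (z[None, :] - centers[:, None]) ** 2)

explicit_params = r * d            # explicit-rank LoRA factor
generative_params = d + 2 * r      # latent + (center, width) per RBF
print(A_gen.shape, explicit_params, generative_params)  # (16, 64) 1024 96
```

Under this sketch the generative cost grows by 2 parameters per additional basis vector rather than d, which is how a higher effective rank can fit in a smaller parameter budget.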
Problem

Research questions and friction points this paper is trying to address.

Low-rank adaptation
parameter redundancy
model capacity
basis vectors
parameter efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank adaptation
Radial Basis Functions
Parameter Efficiency
Nonlinear Generation
Fine-tuning