Null-LoRA: Low-Rank Adaptation on Null Space

📅 2025-12-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods such as LoRA perform low-rank updates across the full parameter space, introducing redundancy. We observe that pretrained models possess nontrivial null spaces, which naturally serve as effective subspaces for low-rank adaptation. To exploit this property, we propose Null-space based Low-Rank Adaptation (Null-LoRA), which strictly constrains incremental updates to the model's null space. Null-LoRA combines parameter freezing with subspace optimization via singular value decomposition and orthogonal null-space projection, enhancing effective rank and parameter efficiency without increasing the number of trainable parameters. Empirically, Null-LoRA achieves state-of-the-art performance on image-text retrieval and visual question answering with significantly fewer tunable parameters, demonstrating that null-space-constrained adaptation improves both training efficiency and generalization, and that the null space is a principled, underutilized resource for PEFT.
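The summary mentions singular value decomposition and orthogonal null-space projection without spelling out the mechanics. Below is a minimal, hypothetical sketch (not the authors' code) of one plausible reading: a basis N of the right null space of a frozen pretrained weight W is taken from the right singular vectors with near-zero singular values, and the projection N @ N.T confines a LoRA update B @ A to input directions that W already maps to zero. The function names and the tolerance tol are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): constrain a LoRA
# update to the right null space of a frozen pretrained weight via SVD.
import torch

def right_null_basis(W: torch.Tensor, tol: float = 1e-5) -> torch.Tensor:
    """Orthonormal basis N (n x k) of the approximate right null space of W,
    i.e. W @ N ~= 0, spanned by the right singular vectors whose singular
    values fall below `tol`."""
    _, S, Vh = torch.linalg.svd(W, full_matrices=True)  # Vh: (n, n)
    rank = int((S > tol).sum())
    return Vh[rank:].T  # the remaining rows of Vh span the null space

def null_space_update(B: torch.Tensor, A: torch.Tensor,
                      N: torch.Tensor) -> torch.Tensor:
    """Project the LoRA update B @ A onto the null space: the projected
    update only acts on input directions that W maps to zero."""
    return (B @ A) @ N @ N.T

# Toy usage with a deliberately rank-deficient "pretrained" weight.
W = torch.randn(64, 16) @ torch.randn(16, 128)    # rank <= 16
N = right_null_basis(W)                           # (128, 112)
B, A = torch.randn(64, 4), torch.randn(4, 128)    # rank-4 LoRA factors
delta_W = null_space_update(B, A, N)
print(torch.linalg.norm(W @ N))                   # ~0: N lies in the null space
```

Under this reading, (W + ΔW)x changes the layer's output only for inputs with a component in W's null space, leaving pretrained behavior on the row space intact; whether Null-LoRA targets the left or right null space is a detail of the paper not reproduced here.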

📝 Abstract
Parameter-efficient fine-tuning methods, particularly LoRA and its variants, have gained considerable popularity for adapting large-scale models to downstream tasks. Existing methods perform low-rank adaptation over the full parameter space, yet fine-tuning within a well-chosen subspace can be just as effective. Motivated by the observation that pre-trained models possess non-trivial null spaces, we propose Null-space based Low-Rank Adaptation (Null-LoRA). Null-LoRA reduces redundancy and enhances the effective rank of the update by freezing portions of the low-rank matrices. To further improve parameter efficiency, Null-LoRA constrains the entire incremental update to the null space, making full use of the update capacity when adapting to new tasks. In extensive experiments on image-text retrieval and visual question answering, Null-LoRA surpasses the state of the art with fewer trainable parameters.
Problem

Research questions and friction points this paper is trying to address.

Low-rank updates over the full parameter space introduce redundancy
Redundant directions in the low-rank matrices limit the effective rank of the update
The non-trivial null spaces of pre-trained models go unused during adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Performs low-rank adaptation within the model's null space
Freezes portions of the low-rank matrices to reduce redundancy (see the sketch below)
Constrains the entire incremental update to the null space for parameter efficiency
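To make the freezing idea above concrete, here is a minimal, hypothetical sketch (the paper's exact freezing scheme is not reproduced): the down-projection A is fixed to a frozen orthonormal basis, standing in for null-space directions of the pretrained weight, and only the up-projection B is trained, so a rank-r adapter trains half the parameters of a plain LoRA layer. The class and argument names are illustrative assumptions.

```python
# Hypothetical sketch: a LoRA module whose down-projection is frozen.
import torch
import torch.nn as nn

class PartiallyFrozenLoRA(nn.Module):
    def __init__(self, d_in: int, d_out: int, r: int, frozen_basis: torch.Tensor):
        super().__init__()
        # Frozen factor A (r x d_in): registered as a buffer, so it receives
        # no gradient. `frozen_basis` stands in for null-space directions.
        self.register_buffer("A", frozen_basis[:, :r].T.contiguous())
        # Trainable factor B (d_out x r), zero-initialized so the initial update is 0.
        self.B = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns only the incremental update; the caller adds it to the
        # frozen pretrained layer's output.
        return x @ self.A.T @ self.B.T

# Toy usage: QR of a random matrix stands in for a null-space basis.
basis = torch.linalg.qr(torch.randn(128, 8)).Q   # (128, 8), orthonormal columns
lora = PartiallyFrozenLoRA(d_in=128, d_out=64, r=8, frozen_basis=basis)
n_trainable = sum(p.numel() for p in lora.parameters() if p.requires_grad)
print(n_trainable)  # 512 = d_out * r; plain LoRA would also train r * d_in = 1024
```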
👥 Authors
Yi Zhang
School of Computer Science and Engineering, Sun Yat-sen University, Guangdong, China
Yulei Kang
School of Computer Science and Engineering, Sun Yat-sen University, Guangdong, China
Haoxuan Chen
PhD Candidate at ICME, Stanford University
Applied and Computational Mathematics · Statistics · Machine Learning · Scientific Computing
Jinxuan Li
School of Computer Science and Engineering, Sun Yat-sen University, Guangdong, China
Jian-Fang Hu
Sun Yat-sen University
Computer Vision and Machine Learning