🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods—e.g., LoRA—perform low-rank updates across the full parameter space, introducing redundancy. We observe that pretrained models possess nontrivial null spaces, which naturally serve as effective subspaces for low-rank adaptation. To exploit this property, we propose Null-space based Low-Rank Adaptation (Null-LoRA), which constrains incremental updates to the model’s null space. Null-LoRA integrates parameter freezing and subspace optimization via singular value decomposition and orthogonal null-space projection. This design enhances effective rank and parameter efficiency without increasing the number of trainable parameters. Empirically, Null-LoRA achieves state-of-the-art performance on image-text retrieval and visual question answering tasks with significantly fewer tunable parameters. These results indicate that null-space-constrained adaptation improves both training efficiency and generalization, validating the null space as a principled and underutilized resource for PEFT.
📝 Abstract
Parameter-efficient fine-tuning methods, particularly LoRA and its variants, have gained considerable popularity for adapting large-scale models to downstream tasks. Existing methods perform low-rank adaptation over the full parameter space, yet fine-tuning within a subspace can achieve comparable effectiveness. Inspired by the observation that pre-trained models possess non-trivial null spaces, we propose Null-space based Low-Rank Adaptation (Null-LoRA). Null-LoRA reduces redundancy and enhances effective rank by freezing portions of the low-rank matrices. To further improve parameter efficiency, Null-LoRA constrains the entire incremental update to lie within the null space, maximizing the utility of the update for adapting to new tasks. In extensive experiments on image-text retrieval and visual question answering tasks, Null-LoRA surpasses the state of the art with fewer parameters.
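The core mechanism described above — restricting a LoRA-style increment to the null space of a pretrained weight via SVD and orthogonal projection — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the matrix shapes, the projection side (input/right null space), and the rank tolerance are all hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weight W (d_out x d_in) with d_out < d_in,
# so its (right) null space {x : W x = 0} is non-trivial.
d_out, d_in, r = 8, 16, 2
W = rng.standard_normal((d_out, d_in))

# SVD of W: right singular vectors whose singular values are ~0
# form an orthonormal basis of the null space.
U, S, Vt = np.linalg.svd(W, full_matrices=True)
rank = int(np.sum(S > 1e-10))
null_basis = Vt[rank:]                 # (d_in - rank) x d_in

# A standard LoRA increment dW = B @ A ...
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))

# ... projected onto null(W) with the orthogonal projector
# P_null = N^T N, so the update only acts on input directions W ignores.
P_null = null_basis.T @ null_basis
dW = (B @ A) @ P_null

# Sanity checks: the basis really spans null(W), and the projected
# update vanishes on W's row space (here: the top right singular vector).
print(np.allclose(W @ null_basis.T, 0.0, atol=1e-8))   # True
print(np.allclose(dW @ Vt[0], 0.0, atol=1e-8))         # True
```

Because `dW` annihilates the row space of `W`, the adapted model `(W + dW) x` agrees with the pretrained model on inputs the original weight already responds to, while the new capacity is spent entirely on previously unused directions — the intuition behind treating the null space as a resource for adaptation.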