LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address cross-task parameter interference, redundant trainable parameters, and the difficulty of continual learning in multi-task LoRA fine-tuning, this paper proposes LoRI (LoRA with Reduced Interference). LoRI freezes the projection matrices A as random projections and trains only the matrices B, which are sparsified with task-specific masks. By combining approximately orthogonal random adapter subspaces with structured sparse masking, it keeps task representations largely isolated. This design substantially mitigates interference, enables effective adapter merging, and supports continual learning while alleviating catastrophic forgetting. Evaluated across four domains (natural language understanding, mathematical reasoning, code generation, and safety alignment), LoRI outperforms full fine-tuning and state-of-the-art PEFT methods while using up to 95% fewer trainable parameters than standard LoRA, demonstrating advantages in both parameter efficiency and generalization.
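The core mechanism, frozen random A plus a sparsely masked trainable B, can be sketched in a few lines. This is an illustrative numpy sketch, not the authors' implementation; the dimensions, mask density, and `lori_delta` helper are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden dim and LoRA rank (illustrative sizes, not from the paper)

# Frozen random projection A: never trained under LoRI.
A = rng.standard_normal((r, d)) / np.sqrt(d)

# Trainable B, restricted by a task-specific binary sparse mask M.
B = np.zeros((d, r))
M = (rng.random((d, r)) < 0.05).astype(float)  # keep ~5% of B's entries

def lori_delta(x, scale=1.0):
    """LoRI's low-rank update to a frozen weight: scale * (B ⊙ M) @ A @ x."""
    return scale * ((B * M) @ (A @ x))

# A gradient step only ever touches the masked entries of B,
# so all other parameters stay exactly zero (and untrained).
grad_B = rng.standard_normal(B.shape)
B -= 0.1 * (grad_B * M)
assert np.all(B[M == 0] == 0)
```

Only the surviving entries of B are trainable, which is where the large reduction in tunable parameters comes from.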

📝 Abstract
Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI
Problem

Research questions and friction points this paper is trying to address.

Reduces parameter interference in multi-task LoRA adaptation
Minimizes cross-task interference via orthogonal adapter subspaces
Supports continual learning via sparsity to mitigate catastrophic forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Freezes projection matrices A as random projections
Sparsifies matrices B with task-specific masks
Leverages orthogonality to reduce cross-task interference
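The three innovations above combine at merge time: because each task's A is an independent random projection and each B is sparse under its own mask, summing the per-task low-rank deltas mixes largely disjoint subspaces. A hedged numpy sketch (the `make_adapter` helper, seeds, and density are assumptions for illustration):

```python
import numpy as np

d, r = 16, 4  # illustrative hidden dim and rank

def make_adapter(seed, density=0.05):
    """One LoRI-style adapter: frozen random A, sparse B (random values stand in for trained ones)."""
    g = np.random.default_rng(seed)
    A = g.standard_normal((r, d)) / np.sqrt(d)
    M = (g.random((d, r)) < density).astype(float)
    B = g.standard_normal((d, r)) * M  # trained values live only under the mask
    return A, B

# Merge two task adapters by summing their weight updates.
(A1, B1), (A2, B2) = make_adapter(1), make_adapter(2)
delta = B1 @ A1 + B2 @ A2  # merged update to the frozen base weight

# Independent random A's give near-orthogonal adapter subspaces, and
# sparse masks rarely overlap, so each task's contribution survives the merge.
overlap = np.sum((B1 != 0) & (B2 != 0)) / max(np.sum(B1 != 0), 1)
```

Here `overlap` measures how many of task 1's trained entries collide with task 2's; at low mask density it is near zero, which is the intuition behind the reduced cross-task interference.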