Task-Driven Subspace Decomposition for Knowledge Sharing and Isolation in LoRA-based Continual Learning

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central tension in existing LoRA-based continual learning methods: balancing knowledge sharing against knowledge isolation across tasks, an imbalance that often produces either insufficient transfer or severe interference. To resolve this, the authors propose Low-rank Decomposition and Adaptation (LoDA), which decouples shared and task-specific knowledge by decomposing LoRA weights into orthogonal subspaces guided by a task-driven projection-energy analysis. LoDA combines three components: energy-aware subspace decomposition, Gradient-Aligned Optimization (GAO), and a closed-form recalibration of the shared-component updates. Experiments on multiple continual learning benchmarks show that LoDA outperforms current state-of-the-art methods, enhancing knowledge transfer while mitigating catastrophic interference across sequential tasks.

📝 Abstract
Continual Learning (CL) requires models to sequentially adapt to new tasks without forgetting old knowledge. Recently, Low-Rank Adaptation (LoRA), a representative Parameter-Efficient Fine-Tuning (PEFT) method, has gained increasing attention in CL. Several LoRA-based CL methods reduce interference across tasks by separating their update spaces, typically building the new space from the estimated null space of past tasks. However, they (i) overlook task-shared directions, which suppresses knowledge transfer, and (ii) fail to capture truly effective task-specific directions, since these "null bases" of old tasks can remain nearly inactive for new tasks when tasks are correlated. To address this, we study LoRA learning capability from a projection-energy perspective and propose Low-rank Decomposition and Adaptation (LoDA). It performs a task-driven decomposition that builds general and truly task-specific LoRA subspaces by solving two energy-based objectives, decoupling directions for knowledge sharing and isolation. LoDA fixes the LoRA down-projections on the two subspaces and learns robust up-projections via a Gradient-Aligned Optimization (GAO) approach. After each task, before integrating the LoRA updates into the backbone, LoDA derives a closed-form recalibration for the general update, approximating a feature-level joint optimum along the task-shared direction. Experiments indicate that LoDA outperforms existing CL methods.
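The core idea of splitting directions into a high-energy shared subspace and an orthogonal task-specific subspace can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's actual energy objectives: here "projection energy" is approximated by eigenvalues of an aggregate feature covariance, and the two down-projection bases are simply consecutive eigenvector blocks (the function name `split_subspaces` and the toy covariance are illustrative inventions).

```python
import numpy as np

def split_subspaces(feature_cov, r_shared, r_task):
    """Illustrative split of directions by projection energy:
    top eigen-directions of a feature covariance act as the shared
    subspace; the next block, orthogonal to it, acts as the
    task-specific subspace. Both serve as fixed down-projections,
    with only the up-projections left trainable."""
    # Eigendecomposition of the symmetric covariance; columns of U
    # are orthonormal directions, sorted here by descending energy.
    eigvals, U = np.linalg.eigh(feature_cov)
    order = np.argsort(eigvals)[::-1]
    U = U[:, order]
    A_shared = U[:, :r_shared]                  # high-energy, task-shared directions
    A_task = U[:, r_shared:r_shared + r_task]   # orthogonal, lower-energy directions
    return A_shared, A_task

# Toy example: covariance built from random features.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 256))
cov = X @ X.T / X.shape[1]
A_s, A_t = split_subspaces(cov, r_shared=4, r_task=4)
# The two bases are mutually orthogonal by construction.
print(np.allclose(A_s.T @ A_t, 0.0, atol=1e-8))  # True
```

Because the two down-projection bases are orthogonal, updates written through them occupy disjoint directions of weight space, which is the mechanism by which sharing and isolation are decoupled.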
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Low-Rank Adaptation
Knowledge Sharing
Task Interference
Subspace Decomposition
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA
Continual Learning
Subspace Decomposition
Knowledge Sharing
Gradient-Aligned Optimization