Resolving Conflicts in Lifelong Learning via Aligning Updates in Subspaces

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Catastrophic forgetting in LoRA-based continual learning is largely driven by gradient-direction conflicts across tasks. This paper proposes a subspace directional-alignment method that introduces a dual-regularization objective to jointly constrain the direction and magnitude of parameter updates, together with a retraining-free, magnitude-weighted adapter fusion strategy that explicitly preserves historical knowledge. The work is the first to integrate directional alignment, magnitude-aware adaptation, and low-rank parameterization into a unified LoRA-based continual learning framework. Across multiple NLP and vision continual learning benchmarks, the method significantly outperforms state-of-the-art approaches, mitigating forgetting while improving cross-task representation stability and generalization.

📝 Abstract
Low-Rank Adaptation (LoRA) enables efficient Continual Learning but often suffers from catastrophic forgetting due to destructive interference between tasks. Our analysis reveals that this degradation is primarily driven by antagonistic directional updates where new task gradients directly oppose the historical weight trajectory. To address this, we propose PS-LoRA (Parameter Stability LoRA), a framework designed to resolve conflicts by aligning updates within the optimization subspace. Our approach employs a dual-regularization objective that penalizes conflicting directions and constrains magnitude deviations to ensure consistency with prior knowledge. Additionally, we implement a magnitude-based merging strategy to consolidate sequential adapters into a robust representation without retraining. Experiments on NLP and Vision benchmarks show that PS-LoRA outperforms state-of-the-art methods by preserving the stability of learned representations while efficiently adapting to new domains.
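The paper does not publish pseudocode, but the dual-regularization objective described above can be illustrated with a minimal sketch: one term penalizes update directions that oppose the historical weight trajectory (measured by cosine similarity), and one term constrains magnitude drift. The function name, hyperparameters (`lam_dir`, `lam_mag`), and the exact penalty forms here are hypothetical choices for illustration, not the authors' formulation.

```python
import numpy as np

def dual_regularizer(delta_w, hist_w, lam_dir=0.1, lam_mag=0.01):
    """Hypothetical sketch of a dual regularizer in the spirit of PS-LoRA.

    delta_w: candidate parameter update for the new task
    hist_w:  accumulated historical weight trajectory
    Penalizes (1) antagonistic directions and (2) magnitude deviation.
    """
    d, h = delta_w.ravel(), hist_w.ravel()
    cos = d @ h / (np.linalg.norm(d) * np.linalg.norm(h) + 1e-8)
    dir_penalty = max(0.0, -cos)  # nonzero only when the update opposes history
    mag_penalty = (np.linalg.norm(d) - np.linalg.norm(h)) ** 2 / h.size
    return lam_dir * dir_penalty + lam_mag * mag_penalty
```

An update aligned with the historical trajectory incurs no directional penalty, while an antagonistic update is pushed back toward the shared subspace.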
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in continual learning with LoRA.
Resolves antagonistic gradient conflicts via subspace alignment.
Ensures stable adaptation across NLP and vision tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligning updates within subspaces to resolve conflicts
Dual-regularization penalizes conflicting directions and deviations
Magnitude-based merging consolidates adapters without retraining
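The retraining-free merging step above can be sketched as a norm-weighted average of per-task LoRA deltas. This is an illustrative guess at the mechanism: the weighting scheme (Frobenius-norm weights) and the `(A, B)` adapter convention are assumptions, not the paper's exact procedure.

```python
import numpy as np

def merge_adapters(adapters):
    """Hypothetical magnitude-weighted fusion of sequential LoRA adapters.

    adapters: list of (A, B) low-rank factor pairs; each task's weight
    update is delta_t = B_t @ A_t. Deltas with larger Frobenius norm
    contribute more to the consolidated update, with no retraining.
    """
    deltas = [B @ A for (A, B) in adapters]
    norms = np.array([np.linalg.norm(d) for d in deltas])
    weights = norms / norms.sum()
    return sum(w * d for w, d in zip(weights, deltas))
```

Merging identical adapters recovers the original delta, and the result keeps the full weight-update shape, so it can be folded into the base model directly.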