Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for Continual Learning in LLMs

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting in large language models (LLMs) during continual learning, where performance on previously learned tasks degrades sharply, this paper proposes OA-Adapter, an end-to-end framework that tunes adapters in budget-adaptive orthogonal subspaces. OA-Adapter unifies dynamic bottleneck-dimension adaptation and orthogonal subspace learning within a single training stage, departing from conventional multi-stage, decoupled paradigms. It allocates parameter budgets dynamically at the granularity of individual layers and tasks, and introduces a task-aware orthogonality constraint to explicitly mitigate inter-task interference. On standard continual learning benchmarks, OA-Adapter achieves state-of-the-art accuracy while using 58.5% fewer parameters than prior methods, substantially advancing the Pareto frontier of accuracy versus parameter efficiency.

📝 Abstract
Large language models (LLMs) often suffer from catastrophic forgetting in continual learning (CL) scenarios, where performance on previously learned tasks degrades severely while training on sequentially arriving tasks. Although pioneering CL approaches using orthogonal subspaces can mitigate task interference, they typically employ fixed budget allocation, neglecting the varying complexity across tasks and layers. Moreover, recent budget-adaptive tuning methods for LLMs often adopt multi-stage paradigms that decouple optimization and budget allocation. Such decoupling can misalign the allocated budget with the optimization objective, hindering these approaches' practical application in CL scenarios. To address these limitations, we propose OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs that unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage. Specifically, OA-Adapter introduces a dynamic bottleneck dimension adaptation mechanism that simultaneously allocates an efficient parameter budget and optimizes task objectives without misalignment. To effectively preserve previously acquired knowledge while coordinating with the dynamic budget allocation, orthogonal constraints are applied specifically between the parameter subspace of the current task and the dynamically allocated parameter subspaces of historical tasks. Experimental results on continual learning benchmarks demonstrate that OA-Adapter outperforms state-of-the-art methods in both accuracy and parameter efficiency, achieving higher average accuracy while using 58.5% fewer parameters on the standard CL benchmark.
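The task-aware orthogonal constraint described in the abstract can be illustrated with a small sketch: penalize the overlap between the current task's adapter subspace and the frozen subspaces of earlier tasks, so that gradient updates for the new task stay (approximately) orthogonal to directions used by old tasks. The following NumPy snippet is a minimal illustration under assumed shapes, not the paper's implementation; the function name `orthogonality_penalty` and the choice of a squared-Frobenius-norm penalty are expository assumptions.

```python
import numpy as np

def orthogonality_penalty(current_basis, past_bases):
    """Sum of squared Frobenius norms of cross-task overlaps.

    current_basis: (d, r_t) down-projection basis of the current task's adapter.
    past_bases:    list of (d, r_i) frozen bases from earlier tasks.
    The penalty is zero iff the current subspace is orthogonal to every
    historical subspace.
    """
    return sum(float(np.linalg.norm(B.T @ current_basis, "fro") ** 2)
               for B in past_bases)

rng = np.random.default_rng(0)
d = 16
# Frozen, orthonormalized basis of a hypothetical task 1 (reduced QR).
past = [np.linalg.qr(rng.standard_normal((d, 4)))[0]]
# An unconstrained candidate basis for task 2: it overlaps task 1.
overlap = rng.standard_normal((d, 4))
print(orthogonality_penalty(overlap, past) > 1e-6)   # True: overlap penalized

# Projecting onto the orthogonal complement of the past subspace drives
# the penalty to (numerically) zero.
proj = overlap - past[0] @ (past[0].T @ overlap)
print(orthogonality_penalty(proj, past) < 1e-9)      # True
```

In a training loop, such a penalty (or an explicit projection like `proj` above) would be combined with the task loss; the paper's actual constraint couples this with its dynamic budget allocation.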
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in continual learning for LLMs
Dynamic budget adaptation for varying task and layer complexity
Unify budget allocation and orthogonal subspace learning end-to-end
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic budget adaptation for parameter efficiency
Orthogonal subspace learning to prevent forgetting
End-to-end training for aligned optimization
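The dynamic budget idea in the bullets above, adapting the adapter's bottleneck dimension per layer and task, can be sketched with a learnable gate per bottleneck dimension: dimensions whose gate falls below a threshold are pruned, shrinking the effective parameter budget. This is a hedged NumPy sketch, not the paper's mechanism; the sigmoid gating, the threshold `tau`, and all names here are assumptions for illustration.

```python
import numpy as np

def gated_adapter_forward(x, W_down, W_up, gates, tau=0.5):
    """Bottleneck-adapter forward pass with a soft gate per dimension.

    x:      (n, d) input activations.
    W_down: (d, r) down-projection; W_up: (r, d) up-projection.
    gates:  (r,) learnable logits, one per bottleneck dimension; dimensions
            whose sigmoid-activated gate falls below `tau` are pruned.
    """
    g = 1.0 / (1.0 + np.exp(-gates))      # sigmoid gate in (0, 1)
    mask = np.where(g >= tau, g, 0.0)     # hard-prune weak dimensions
    h = x @ W_down                        # (n, r) bottleneck activations
    return (h * mask) @ W_up              # gated up-projection, (n, d)

rng = np.random.default_rng(1)
d, r = 8, 4
x = rng.standard_normal((2, d))
W_down = rng.standard_normal((d, r))
W_up = rng.standard_normal((r, d))
gates = np.array([4.0, -4.0, 4.0, -4.0])  # two dims kept, two pruned
out = gated_adapter_forward(x, W_down, W_up, gates)

# Effective budget: only the surviving bottleneck dimensions cost parameters.
kept = int((1.0 / (1.0 + np.exp(-gates)) >= 0.5).sum())
print(kept)  # 2
```

Training the gate logits jointly with the task loss (plus the orthogonality constraint) is what makes the budget allocation end-to-end rather than a separate search stage.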
👥 Authors
Zhiyi Wan
Beijing University of Posts and Telecommunications
Wanrou Du
Beijing University of Posts and Telecommunications
Liang Li
Pengcheng Laboratory
Miao Pan
Professor, Electrical and Computer Engineering, University of Houston (Wireless for AI, Cybersecurity for AI, Mobile/Edge AI Systems, Underwater IoT Nets)
Xiaoqi Qin
Beijing University of Posts and Telecommunications