Is Parameter Isolation Better for Prompt-Based Continual Learning?

πŸ“… 2026-01-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses a limitation of existing prompt-based continual learning methods: they assign each task a fixed set of prompts, fully isolating task knowledge and leaving much of the parameter budget underused. The authors propose a dynamic prompt-sharing framework that maintains a global prompt pool and uses a task-aware gated routing mechanism to sparsely activate the subset of prompts relevant to each task. A history-aware modulator, driven by cumulative prompt activation statistics, protects frequently reused prompts from being overwritten, so shared prompts can be collaboratively optimized without reintroducing catastrophic forgetting. By moving beyond the static prompt allocation paradigm, the method achieves state-of-the-art accuracy and parameter efficiency across multiple continual learning benchmarks.
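The summary's task-aware gated routing can be sketched in a few lines. The paper gives no implementation details, so everything below is a hypothetical illustration: the pool size, the top-k value, the `[query; task]` gate input, and the softmax over the active subset are all assumptions, not the authors' actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

POOL_SIZE, PROMPT_DIM, TOP_K = 16, 8, 4  # hypothetical sizes

# One global prompt pool shared by all tasks (no per-task slices).
prompt_pool = rng.normal(size=(POOL_SIZE, PROMPT_DIM))

def route_prompts(query, task_embedding, gate_weights, top_k=TOP_K):
    """Task-aware gated routing: score every prompt in the global pool
    against a [query; task] feature, then sparsely activate the top-k."""
    gate_input = np.concatenate([query, task_embedding])
    scores = gate_weights @ gate_input            # one score per pooled prompt
    active = np.argsort(scores)[-top_k:]          # sparse subset of indices
    weights = np.exp(scores[active])
    weights /= weights.sum()                      # softmax over the active set
    prompt = weights @ prompt_pool[active]        # weighted prompt combination
    return active, prompt

gate_weights = rng.normal(size=(POOL_SIZE, PROMPT_DIM * 2))
query = rng.normal(size=PROMPT_DIM)
task_emb = rng.normal(size=PROMPT_DIM)
active, prompt = route_prompts(query, task_emb, gate_weights)
```

Because different tasks can select overlapping subsets, prompts are shared rather than isolated, which is the parameter-efficiency argument the summary makes.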

πŸ“ Abstract
Prompt-based continual learning methods effectively mitigate catastrophic forgetting. However, most existing methods assign a fixed set of prompts to each task, completely isolating knowledge across tasks and resulting in suboptimal parameter utilization. To address this, we consider the practical needs of continual learning and propose a prompt-sharing framework. This framework constructs a global prompt pool and introduces a task-aware gated routing mechanism that sparsely activates a subset of prompts to achieve dynamic decoupling and collaborative optimization of task-specific feature representations. Furthermore, we introduce a history-aware modulator that leverages cumulative prompt activation statistics to protect frequently used prompts from excessive updates, thereby mitigating inefficient parameter usage and knowledge forgetting. Extensive analysis and empirical results demonstrate that our approach consistently outperforms existing static allocation strategies in effectiveness and efficiency.
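The abstract's history-aware modulator can likewise be sketched under stated assumptions: the damping schedule `1 / (1 + count)` and the plain SGD-style update below are hypothetical stand-ins for whatever statistic-to-update mapping the paper actually uses; only the idea of scaling updates by cumulative activation counts comes from the text.

```python
import numpy as np

POOL_SIZE = 16
activation_counts = np.zeros(POOL_SIZE)  # cumulative per-prompt usage

def modulated_update(prompt_pool, grads, active, lr=0.1):
    """History-aware modulation: damp updates to prompts that have been
    activated often, protecting reused knowledge from being overwritten."""
    activation_counts[active] += 1
    # Hypothetical schedule: the update shrinks with cumulative usage.
    scale = 1.0 / (1.0 + activation_counts[active])
    prompt_pool[active] -= lr * scale[:, None] * grads
    return prompt_pool
```

A prompt activated once is updated at half strength; after many activations its updates approach zero, which is one simple way to realize the "protect frequently used prompts from excessive updates" behavior the abstract describes.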
Problem

Research questions and friction points this paper is trying to address.

prompt-based continual learning
catastrophic forgetting
parameter isolation
prompt allocation
knowledge sharing
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt-sharing
task-aware gated routing
history-aware modulator
continual learning
parameter isolation