InTAct: Interval-based Task Activation Consolidation for Continual Learning

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Continual learning under domain shift, where input distributions change while the label space remains fixed, is prone to catastrophic forgetting caused by representation drift, especially in prompt-based methods. To address this, we propose InTAct (Interval-based Task Activation Consolidation), a dynamic regularization framework that constrains updates to critical neurons in shared layers so their activations remain within historically observed intervals, thereby preserving functional stability without freezing parameters or replaying past data. InTAct integrates seamlessly into existing prompt-based architectures as an orthogonal module, requiring no architectural modifications. By explicitly suppressing feature overwriting, it strikes an effective balance between stability and plasticity. Evaluated on domain-incremental benchmarks, including DomainNet and ImageNet-R, InTAct significantly mitigates representation drift, improving Average Accuracy by up to 8 percentage points over prior state-of-the-art methods.

📝 Abstract
Continual learning aims to enable neural networks to acquire new knowledge without forgetting previously learned information. While recent prompt-based methods perform strongly in class-incremental settings, they remain vulnerable under domain shifts, where the input distribution changes but the label space remains fixed. This exposes a persistent problem known as representation drift. Shared representations evolve in ways that overwrite previously useful features and cause forgetting even when prompts isolate task-specific parameters. To address this issue, we introduce InTAct, a method that preserves functional behavior in shared layers without freezing parameters or storing past data. InTAct captures the characteristic activation ranges associated with previously learned tasks and constrains updates to ensure the network remains consistent within these regions, while still allowing for flexible adaptation elsewhere. In doing so, InTAct stabilizes the functional role of important neurons rather than directly restricting parameter values. The approach is architecture-agnostic and integrates seamlessly into existing prompt-based continual learning frameworks. By regulating representation changes where past knowledge is encoded, InTAct achieves a principled balance between stability and plasticity. Across diverse domain-incremental benchmarks, including DomainNet and ImageNet-R, InTAct consistently reduces representation drift and improves performance, increasing Average Accuracy by up to 8 percentage points over state-of-the-art baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses representation drift in continual learning during domain shifts
Preserves functional behavior in shared layers without freezing parameters
Stabilizes important neurons while allowing flexible adaptation elsewhere
Innovation

Methods, ideas, or system contributions that make the work stand out.

Captures the characteristic activation intervals of important neurons on previously learned tasks
Constrains shared-layer updates to remain consistent within those intervals while allowing adaptation elsewhere
Architecture-agnostic module that integrates into existing prompt-based continual learning frameworks
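The core mechanism described above, record per-neuron activation intervals on past tasks, then penalize activations that drift outside them, can be sketched as follows. This is a minimal illustration under assumed design choices (a hinge-style penalty and min/max intervals); the paper's exact regularizer and interval-estimation procedure are not specified here, and the function names are hypothetical.

```python
import numpy as np

def record_intervals(activations):
    """Record per-neuron activation intervals [lo, hi] observed on a past task.

    activations: array of shape (num_samples, num_neurons).
    Returns the element-wise lower and upper bounds per neuron.
    """
    return activations.min(axis=0), activations.max(axis=0)

def interval_violation_penalty(activations, lo, hi):
    """Hinge-style drift penalty (an assumed form, not the paper's exact loss).

    Zero for activations inside [lo, hi]; grows linearly with the distance
    outside the interval, discouraging overwriting of past-task features.
    """
    below = np.maximum(lo - activations, 0.0)   # how far below the interval
    above = np.maximum(activations - hi, 0.0)   # how far above the interval
    return float((below + above).mean())

# Hypothetical usage: activations that stay inside the recorded intervals
# incur no penalty, while drifted activations are penalized.
rng = np.random.default_rng(0)
past = rng.normal(size=(100, 8))      # shared-layer activations on a past task
lo, hi = record_intervals(past)
inside = (lo + hi) / 2.0              # midpoints: safely within each interval
drifted = hi + 1.0                    # activations pushed above every interval
```

In training, this penalty would be added to the new-task loss so that gradient updates preserve the network's behavior on regions where past knowledge is encoded, without freezing any parameters.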