Scaling Continual Learning with Bi-Level Routing Mixture-of-Experts

📅 2026-02-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a core challenge in class-incremental continual learning built on pre-trained models: balancing stability and plasticity over extremely long task sequences comprising 100–300 non-overlapping tasks. To this end, the authors propose CaRE, a scalable framework built on a bi-level routing mixture-of-experts (BR-MoE) architecture that dynamically selects task-relevant routers and expert modules to inject efficient, discriminative feature representations at intermediate network layers. CaRE is the first method shown to model such ultra-long continual learning sequences effectively, significantly outperforming existing baselines across multiple benchmarks. The results demonstrate that it jointly optimizes task-specific adaptation and general-purpose feature representation, establishing new state-of-the-art performance in large-scale incremental learning scenarios.

πŸ“ Abstract
Continual learning, especially class-incremental learning (CIL) on the basis of a pre-trained model (PTM), has garnered substantial research interest in recent years. However, how to effectively learn both discriminative and comprehensive feature representations while maintaining stability and plasticity over very long task sequences remains an open problem. We propose CaRE, a scalable Continual Learner with an efficient Bi-Level Routing Mixture-of-Experts (BR-MoE). The core idea of BR-MoE is a bi-level routing mechanism: a router selection stage that dynamically activates relevant task-specific routers, followed by an expert routing stage that dynamically activates and aggregates experts, aiming to inject discriminative and comprehensive representations into every intermediate network layer. In addition, we introduce a challenging evaluation protocol for comprehensively assessing CIL methods across very long task sequences spanning hundreds of tasks. Extensive experiments show that CaRE delivers leading performance across a variety of datasets and task settings, including commonly used CIL datasets with classical CIL settings (e.g., 5-20 tasks). To the best of our knowledge, CaRE is the first continual learner that scales to very long task sequences (ranging from 100 to over 300 non-overlapping tasks), while outperforming all baselines by a large margin on such task sequences. Code will be publicly released at https://github.com/LMMMEng/CaRE.git.
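The two-stage routing described in the abstract can be sketched in code. This is a minimal NumPy illustration under assumed shapes and gating choices, not the paper's implementation: all class and parameter names (`BiLevelRoutingMoE`, `k_routers`, `k_experts`, linear experts, top-k softmax gating, residual injection) are hypothetical stand-ins for the mechanism the text names.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class BiLevelRoutingMoE:
    """Hypothetical sketch of a bi-level routing MoE layer.

    Level 1 (router selection): a gate scores task-specific routers and
    keeps the top-k for the current input.
    Level 2 (expert routing): each selected router gates a shared expert
    pool; expert outputs are aggregated with the combined weights.
    """

    def __init__(self, dim, n_routers, n_experts, k_routers=2, k_experts=2):
        self.k_routers = k_routers
        self.k_experts = k_experts
        # Gate that scores the task-specific routers for an input feature.
        self.router_gate = rng.standard_normal((dim, n_routers)) * 0.02
        # Each task-specific router scores the shared experts.
        self.routers = rng.standard_normal((n_routers, dim, n_experts)) * 0.02
        # Experts: simple linear maps, for illustration only.
        self.experts = rng.standard_normal((n_experts, dim, dim)) * 0.02

    def __call__(self, x):  # x: feature vector of shape (dim,)
        # --- Level 1: dynamically activate relevant task-specific routers ---
        r_scores = softmax(x @ self.router_gate)
        top_r = np.argsort(r_scores)[-self.k_routers:]
        r_w = r_scores[top_r] / r_scores[top_r].sum()  # renormalize top-k

        out = np.zeros_like(x)
        for rw, r in zip(r_w, top_r):
            # --- Level 2: expert routing within the selected router ---
            e_scores = softmax(x @ self.routers[r])
            top_e = np.argsort(e_scores)[-self.k_experts:]
            e_w = e_scores[top_e] / e_scores[top_e].sum()
            for ew, e in zip(e_w, top_e):
                out += rw * ew * (x @ self.experts[e])
        # Residual injection of the routed representation into the layer.
        return x + out

layer = BiLevelRoutingMoE(dim=8, n_routers=4, n_experts=6)
y = layer(rng.standard_normal(8))
```

Because only a fixed number of routers and experts are active per input, compute stays roughly constant as new task-specific routers are added over a long task sequence, which is the property that lets this style of layer scale to hundreds of tasks.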
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Class-Incremental Learning
Stability-Plasticity Trade-off
Long Task Sequences
Feature Representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Mixture-of-Experts
Bi-Level Routing
Class-Incremental Learning
Scalability