🤖 AI Summary
This work addresses the challenge of balancing stability and plasticity in few-shot class-incremental learning by proposing a static-dynamic collaborative framework. The learning process is explicitly partitioned into two stages: a static retention phase and a dynamic learning phase. The former employs a static memory module to consolidate previously acquired knowledge, while the latter introduces a dynamic projector, trained collaboratively with that static memory, to adapt efficiently to new classes. This design achieves an effective synergy between preserving old knowledge and learning new categories without altering the underlying model architecture. Extensive experiments on three public benchmarks and a real-world dataset demonstrate that the proposed method significantly outperforms existing approaches, validating its effectiveness and generalization ability.
📝 Abstract
Few-shot class-incremental learning (FSCIL) aims to continuously recognize novel classes from limited data, and it faces the key stability-plasticity dilemma: balancing the retention of old knowledge with the acquisition of new knowledge. To address this issue, we divide the task into two distinct stages and propose a framework termed Static-Dynamic Collaboration (SDC) to achieve a better trade-off between stability and plasticity. Specifically, our method divides the standard FSCIL pipeline into a Static Retaining Stage (SRS) and a Dynamic Learning Stage (DLS), which harness static old-class information and dynamic incremental-class information, respectively. During SRS, we train an initial model with ample data in the base session and preserve its key part as static memory to retain fundamental old knowledge. During DLS, we introduce an additional dynamic projector that is trained jointly with the preserved static memory. By combining both stages, our method improves the retention of old knowledge while continuously adapting to new classes. Extensive experiments on three public benchmarks and a real-world application dataset demonstrate that our method achieves state-of-the-art performance compared with competing methods.
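The two-stage idea above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the static memory is simulated as a frozen random linear feature extractor, the dynamic projector as a trainable linear map, and the class prototype, sample, and learning rate are all illustrative assumptions. The sketch shows only the core mechanism of SDC, that the DLS update moves the projector while the SRS static memory stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 16, 8

# --- Static Retaining Stage (SRS) ---
# Stand-in for the "static memory": in the paper this is the key part of a
# model trained on ample base-session data; here it is a frozen random
# linear feature extractor (an illustrative assumption).
static_memory = rng.normal(size=(D_IN, D_FEAT))

def extract(x):
    """Frozen features from the static memory (never updated after SRS)."""
    return x @ static_memory

# --- Dynamic Learning Stage (DLS) ---
# A trainable projector adapts the frozen features to a novel class,
# here by pulling a few-shot sample's feature toward a class prototype.
projector = np.eye(D_FEAT)            # starts as identity
x_new = rng.normal(size=(1, D_IN))    # one few-shot sample of a novel class
proto = rng.normal(size=(1, D_FEAT))  # hypothetical prototype of that class

def adaptation_loss(P):
    """Squared distance between projected feature and class prototype."""
    return float(np.sum((extract(x_new) @ P - proto) ** 2))

loss_before = adaptation_loss(projector)
lr = 1e-4
for _ in range(50):                   # plain gradient descent on the projector
    f = extract(x_new)
    grad = 2 * f.T @ (f @ projector - proto)
    projector -= lr * grad            # only the projector moves; SRS is frozen
loss_after = adaptation_loss(projector)
```

After the loop, `loss_after` is smaller than `loss_before`: the dynamic projector has adapted to the new class while the static memory, and thus the base knowledge it encodes, is untouched. In the actual method the joint training of projector and static memory is more involved; this sketch only conveys the division of roles.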