AI Summary
Few-Shot Class-Incremental Learning (FSCIL) suffers from a knowledge conflict between adapting to novel classes and retaining previously learned knowledge. Existing forward-looking space construction methods are constrained by prototype bias and structural rigidity, resulting in suboptimal embedding expressiveness. To address this, we propose a consistency-driven calibration and matching framework. First, we introduce a memory-aware prototype calibration mechanism to enhance the conceptual consistency of class centroids in the feature space. Second, we design a dynamic structural matching module that adaptively aligns each incremental session with its optimal manifold space, without requiring a pre-specified number of classes. Inspired by hippocampal associative memory, our method jointly optimizes both feature-level and geometric-structural consistency. Extensive experiments on mini-ImageNet and CUB200 demonstrate state-of-the-art performance, achieving absolute improvements of +3.20% and +3.68% in incremental-session harmonic accuracy, respectively.
Abstract
Few-Shot Class-Incremental Learning (FSCIL) requires models to adapt to novel classes with limited supervision while preserving previously learned knowledge. Existing prospective learning-based space construction methods reserve embedding space to accommodate novel classes. However, prototype deviation and structural fixity limit the expressiveness of the embedding space. In contrast to fixed space reservation, we explore the optimization of feature-structure dual consistency and propose a Consistency-driven Calibration and Matching Framework (ConCM) that systematically mitigates the knowledge conflict inherent in FSCIL. Specifically, inspired by hippocampal associative memory, we design a memory-aware prototype calibration mechanism that extracts generalized semantic attributes from base classes and reintegrates them into novel classes to enhance the conceptual-center consistency of features. Further, we propose dynamic structure matching, which adaptively aligns the calibrated features to a session-specific optimal manifold space, ensuring cross-session structure consistency. Theoretical analysis shows that our method satisfies both geometric optimality and maximum matching, thereby removing the need for class-number priors. On large-scale FSCIL benchmarks including mini-ImageNet and CUB200, ConCM achieves state-of-the-art performance, surpassing the current best method by 3.20% and 3.68% in incremental-session harmonic accuracy.
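To make the memory-aware calibration idea concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): a few-shot prototype is pulled toward its most similar base-class prototypes, which act as the stored "memory", to reduce the bias of a prototype estimated from only a few samples. The function name, weighting scheme, and parameters `top_k` and `alpha` are illustrative assumptions.

```python
import numpy as np

def calibrate_prototype(novel_feats, base_protos, top_k=2, alpha=0.5):
    """Hypothetical sketch of memory-aware prototype calibration.

    novel_feats: (n_shots, d) features of one novel class
    base_protos: (n_base, d) stored base-class prototypes ("memory")
    Returns a (d,) calibrated class centroid.
    """
    # Naive few-shot prototype: mean of the few available features.
    proto = novel_feats.mean(axis=0)
    # Cosine similarity between the naive prototype and each base prototype.
    sims = base_protos @ proto / (
        np.linalg.norm(base_protos, axis=1) * np.linalg.norm(proto) + 1e-8
    )
    # Select the top-k most related base classes as associative memory.
    idx = np.argsort(sims)[-top_k:]
    # Softmax-weight the selected base prototypes by similarity.
    w = np.exp(sims[idx]) / np.exp(sims[idx]).sum()
    memory_term = (w[:, None] * base_protos[idx]).sum(axis=0)
    # Blend the naive prototype with the memory-derived term.
    return alpha * proto + (1 - alpha) * memory_term
```

The convex blend controlled by `alpha` is one simple design choice; the actual method integrates semantic attributes rather than raw prototype averaging.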