🤖 AI Summary
Existing 3D class-incremental learning (CIL) methods suffer severe performance degradation under extreme few-shot settings, primarily due to geometric misalignment and texture bias, which cause semantic ambiguity, prototype instability, and catastrophic forgetting. To address this, we propose Cross-Modal Geometric Rectification (CMGR), the first framework to integrate CLIP's hierarchical spatial semantics into 3D few-shot class-incremental learning. Our approach pairs a structure-aware geometric rectification module, which aligns 3D part structures with CLIP's intermediate-layer spatial priors via attention-driven geometric fusion, with a texture amplification module that synthesizes minimal yet discriminative textures to suppress texture bias and reinforce cross-modal consistency. Additionally, we introduce a base–novel class discriminator that isolates geometric variations to mitigate forgetting. Extensive experiments under both cross-domain and within-domain settings demonstrate significant improvements in classification accuracy and prototype stability. Our method establishes a scalable few-shot incremental learning paradigm for open-world 3D content recognition.
📝 Abstract
The rapid growth of 3D digital content necessitates expandable recognition systems for open-world scenarios. However, existing 3D class-incremental learning methods struggle under extreme data scarcity due to geometric misalignment and texture bias. While recent approaches integrate 3D data with 2D foundation models (e.g., CLIP), they suffer from semantic blurring caused by texture-biased projections and indiscriminate fusion of geometric-textural cues, leading to unstable decision prototypes and catastrophic forgetting. To address these issues, we propose Cross-Modal Geometric Rectification (CMGR), a framework that enhances 3D geometric fidelity by leveraging CLIP's hierarchical spatial semantics. Specifically, we introduce a Structure-Aware Geometric Rectification module that hierarchically aligns 3D part structures with CLIP's intermediate spatial priors through attention-driven geometric fusion. Additionally, a Texture Amplification Module synthesizes minimal yet discriminative textures to suppress noise and reinforce cross-modal consistency. To further stabilize incremental prototypes, we employ a Base-Novel Discriminator that isolates geometric variations. Extensive experiments demonstrate that our method significantly improves 3D few-shot class-incremental learning, achieving superior geometric coherence and robustness to texture bias across cross-domain and within-domain settings.
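The attention-driven geometric fusion at the core of the rectification module can be pictured as cross-attention in which 3D part features act as queries over CLIP's intermediate patch tokens, with the attended result fused back through a residual connection. The sketch below is a minimal, untrained illustration: the projection matrices are random stand-ins and all dimensions (32 part tokens, 49 CLIP patches, feature widths) are hypothetical, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometric_rectification(point_feats, clip_patch_feats, d_k=64, seed=0):
    """Sketch of attention-driven geometric fusion: 3D part features
    (queries) attend to CLIP intermediate patch tokens (keys/values),
    and the result is added residually, nudging the 3D representation
    toward CLIP's spatial priors. Projections are random placeholders
    standing in for learned weights."""
    rng = np.random.default_rng(seed)
    d_p = point_feats.shape[-1]        # 3D feature width
    d_c = clip_patch_feats.shape[-1]   # CLIP token width
    W_q = rng.standard_normal((d_p, d_k)) / np.sqrt(d_p)
    W_k = rng.standard_normal((d_c, d_k)) / np.sqrt(d_c)
    W_v = rng.standard_normal((d_c, d_p)) / np.sqrt(d_c)
    Q = point_feats @ W_q                        # (P, d_k)
    K = clip_patch_feats @ W_k                   # (T, d_k)
    V = clip_patch_feats @ W_v                   # (T, d_p)
    attn = softmax(Q @ K.T / np.sqrt(d_k))       # (P, T) attention map
    return point_feats + attn @ V                # residual fusion, (P, d_p)

# 32 part tokens (256-d) attend to 49 CLIP patch tokens (768-d).
parts = np.random.default_rng(1).standard_normal((32, 256))
patches = np.random.default_rng(2).standard_normal((49, 768))
fused = geometric_rectification(parts, patches)
print(fused.shape)
```

The residual form keeps the original geometry intact while the attention term injects only the spatial evidence CLIP supports, which is one common way such cross-modal "rectification" is realized.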