Seeing 3D Through 2D Lenses: 3D Few-Shot Class-Incremental Learning via Cross-Modal Geometric Rectification

📅 2025-09-18
🤖 AI Summary
Existing 3D class-incremental learning (CIL) methods suffer severe performance degradation under extreme few-shot settings, primarily due to geometric misalignment and texture bias, which cause semantic ambiguity, prototype instability, and catastrophic forgetting. To address this, we propose Cross-Modal Geometric Rectification (CMGR), the first framework to integrate CLIP's hierarchical spatial semantics into 3D few-shot class-incremental learning. Our approach combines a Structure-Aware Geometric Rectification module, which aligns 3D part structures with CLIP's intermediate-layer spatial priors through attention-driven geometric fusion, with a Texture Amplification Module that synthesizes minimal yet discriminative textures to suppress noise and reinforce cross-modal consistency. Additionally, a Base-Novel Discriminator isolates geometric variations to stabilize incremental prototypes and mitigate forgetting. Extensive experiments under both cross-domain and within-domain settings demonstrate significant improvements in classification accuracy and prototype stability, establishing a scalable few-shot incremental learning paradigm for open-world 3D content recognition.

📝 Abstract
The rapid growth of 3D digital content necessitates expandable recognition systems for open-world scenarios. However, existing 3D class-incremental learning methods struggle under extreme data scarcity due to geometric misalignment and texture bias. While recent approaches integrate 3D data with 2D foundation models (e.g., CLIP), they suffer from semantic blurring caused by texture-biased projections and indiscriminate fusion of geometric-textural cues, leading to unstable decision prototypes and catastrophic forgetting. To address these issues, we propose Cross-Modal Geometric Rectification (CMGR), a framework that enhances 3D geometric fidelity by leveraging CLIP's hierarchical spatial semantics. Specifically, we introduce a Structure-Aware Geometric Rectification module that hierarchically aligns 3D part structures with CLIP's intermediate spatial priors through attention-driven geometric fusion. Additionally, a Texture Amplification Module synthesizes minimal yet discriminative textures to suppress noise and reinforce cross-modal consistency. To further stabilize incremental prototypes, we employ a Base-Novel Discriminator that isolates geometric variations. Extensive experiments demonstrate that our method significantly improves 3D few-shot class-incremental learning, achieving superior geometric coherence and robustness to texture bias across cross-domain and within-domain settings.
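The abstract's "attention-driven geometric fusion" aligns 3D part features with CLIP's intermediate-layer spatial tokens. A minimal sketch of such a cross-attention step is below; the function name, shapes, and the residual connection are illustrative assumptions, not the paper's exact interface:

```python
import numpy as np

def cross_attention_fusion(part_feats, clip_feats):
    """Fuse 3D part features with 2D spatial priors via cross-attention.

    part_feats: (P, d) array of P 3D part tokens (queries).
    clip_feats: (T, d) array of intermediate-layer spatial tokens from a
        2D encoder such as CLIP (keys/values). Shapes and the residual
        connection are assumptions for illustration.
    """
    d = part_feats.shape[1]
    scale = 1.0 / np.sqrt(d)
    # Scaled dot-product attention: each 3D part attends to all 2D tokens.
    logits = part_feats @ clip_feats.T * scale            # (P, T)
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)               # row-wise softmax
    fused = attn @ clip_feats                             # (P, d)
    # Residual connection preserves the original geometric signal.
    return part_feats + fused

rng = np.random.default_rng(0)
out = cross_attention_fusion(rng.normal(size=(4, 8)), rng.normal(size=(16, 8)))
print(out.shape)  # (4, 8)
```

In practice the queries, keys, and values would pass through learned projections; this sketch keeps only the attention arithmetic that the fusion relies on.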
Problem

Research questions and friction points this paper is trying to address.

Addressing geometric misalignment in 3D few-shot class-incremental learning
Mitigating texture bias through cross-modal geometric rectification
Stabilizing decision prototypes to prevent catastrophic forgetting
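The prototype-stability problem above arises because few-shot incremental learners typically classify by distance to per-class prototypes built from only a handful of samples. A standard nearest-class-mean scheme, sketched below, shows the mechanism; the paper's exact prototype construction and rectification may differ:

```python
import numpy as np

def update_prototypes(prototypes, feats, labels):
    """Add a prototype per novel class as the mean embedding of its
    few support samples (standard nearest-class-mean; illustrative)."""
    for c in np.unique(labels):
        prototypes[int(c)] = feats[labels == c].mean(axis=0)
    return prototypes

def classify(prototypes, query):
    """Assign a query to the class with the nearest prototype."""
    classes = sorted(prototypes)
    dists = [np.linalg.norm(query - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Two classes, two support samples each (toy 2D embeddings).
proto = {}
feats = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [1.2, 1.0]])
labels = np.array([0, 0, 1, 1])
proto = update_prototypes(proto, feats, labels)
print(classify(proto, np.array([0.1, 0.1])))  # 0
```

With so few samples per class, a texture-biased or misaligned embedding shifts the mean prototype substantially, which is why the paper targets geometric rectification before prototype construction.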
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Modal Geometric Rectification framework
Structure-Aware Geometric Rectification module
Texture Amplification Module synthesizes discriminative textures
Xiang Tuo
South China University of Technology
Xuemiao Xu
South China University of Technology, State Key Laboratory of Subtropical Building Science, Guangdong Provincial Key Lab of Computational Intelligence and Cyberspace Information, Ministry of Education Key Laboratory of Big Data and Intelligent Robot
Bangzhen Liu
City University of Hong Kong
computer vision, 3D vision and generation
Jinyi Li
South China University of Technology
Yong Li
South China University of Technology
Shengfeng He
Singapore Management University
Visual Computing, Generative Models, Computer Vision, Computational Photography, Computer Graphics