🤖 AI Summary
This work addresses the stability-plasticity dilemma in generalized few-shot 3D point cloud segmentation, where models struggle to retain base-class knowledge while adapting to novel classes. To this end, the authors propose HOP3D, a unified framework that decouples the learning of base and novel classes through hierarchical orthogonal prototype learning at both gradient and representation levels. Furthermore, an entropy regularizer guided by prediction uncertainty is introduced to refine few-shot prototypes. Extensive experiments on ScanNet200 and ScanNet++ benchmarks demonstrate that HOP3D consistently outperforms state-of-the-art methods under both 1-shot and 5-shot settings, effectively balancing base-class performance retention with robust adaptation to novel classes.
📝 Abstract
Generalized few-shot 3D point cloud segmentation aims to adapt to novel classes from only a few annotations while maintaining strong performance on base classes, but this remains challenging due to the inherent stability-plasticity trade-off: adapting to novel classes can interfere with shared representations and cause base-class forgetting. We present HOP3D, a unified framework that learns hierarchical orthogonal prototypes with an entropy-based few-shot regularizer to enable robust novel-class adaptation without degrading base-class performance. HOP3D introduces hierarchical orthogonalization that decouples base and novel learning at both the gradient and representation levels, effectively mitigating base-novel interference. To further enhance adaptation under sparse supervision, we incorporate an entropy-based regularizer that leverages predictive uncertainty to refine prototype learning and promote balanced predictions. Extensive experiments on ScanNet200 and ScanNet++ demonstrate that HOP3D consistently outperforms state-of-the-art baselines under both 1-shot and 5-shot settings. The code is available at https://fdueblab-hop3d.github.io/.
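The two ideas named in the abstract can be illustrated concretely. The sketch below is not the authors' implementation: it is a minimal NumPy illustration, assuming Euclidean class prototypes, of (a) representation-level decoupling by projecting novel-class prototypes onto the orthogonal complement of the subspace spanned by frozen base prototypes, and (b) an entropy term over per-point class posteriors that could serve as the uncertainty-guided regularizer. Function names (`orthogonalize_prototypes`, `entropy_regularizer`) are hypothetical.

```python
import numpy as np

def orthogonalize_prototypes(base_protos, novel_protos):
    """Project novel prototypes onto the orthogonal complement of the
    subspace spanned by the (frozen) base prototypes.

    base_protos:  (n_base, d) array of base-class prototypes
    novel_protos: (n_novel, d) array of novel-class prototypes
    Returns novel prototypes with their base-subspace component removed,
    so novel updates cannot interfere with base-class directions.
    """
    # Orthonormal basis of the base-prototype subspace via reduced QR.
    q, _ = np.linalg.qr(base_protos.T)            # q: (d, n_base)
    # Component of each novel prototype lying inside the base subspace.
    in_base_span = novel_protos @ q @ q.T
    return novel_protos - in_base_span

def entropy_regularizer(logits):
    """Mean Shannon entropy of per-point class posteriors.

    Minimizing it sharpens predictions; maximizing it on unlabeled points
    promotes balanced predictions. logits: (n_points, n_classes).
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())
```

After projection, every novel prototype is exactly orthogonal to every base prototype, which is the representation-level decoupling the abstract describes; the gradient-level counterpart would apply an analogous projection to parameter updates rather than to the prototypes themselves.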