🤖 AI Summary
This work addresses the challenge of learning generic part features for open-world 3D shapes without predefined templates or textual supervision. PartField is a feed-forward method that predicts an implicit 3D part feature field. Its core contribution is end-to-end learning of such part feature fields without templates or text labels. To this end, it constructs contrastive learning objectives by distilling 2D and 3D part proposals, and combines continuous implicit field modeling with hierarchical clustering-based decoding, yielding part features that are semantically consistent across shapes. On standard part segmentation benchmarks, PartField is up to 20% more accurate than prior methods, while often being orders of magnitude faster at inference. It also natively supports tasks such as co-segmentation and cross-shape correspondence without architectural modification. These advances improve both the generalizability and practical applicability of 3D part understanding.
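The summary mentions a contrastive objective built from distilled part proposals but gives no details. As an illustration only, a generic part-level contrastive (InfoNCE-style) loss can be sketched as below, where points assigned to the same part proposal act as positives; the function name, the temperature `tau`, and the label format are all assumptions, not the paper's actual formulation:

```python
import numpy as np

def part_contrastive_loss(feats, part_ids, tau=0.1):
    """InfoNCE-style loss over per-point features.

    Points sharing a part-proposal id are treated as positives;
    all other points in the batch act as negatives.
    """
    # L2-normalize features so similarity is cosine similarity
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs
    # log-softmax over each row's similarities
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = part_ids[:, None] == part_ids[None, :]
    np.fill_diagonal(same, False)
    # average negative log-likelihood over all positive pairs
    return -logp[same].mean()

# toy example: two "parts" with well-separated features
rng = np.random.default_rng(0)
feats = np.concatenate([
    np.array([1.0, 0.0]) + 0.05 * rng.standard_normal((8, 2)),
    np.array([0.0, 1.0]) + 0.05 * rng.standard_normal((8, 2)),
])
good_ids = np.array([0] * 8 + [1] * 8)
bad_ids = np.array([0, 1] * 8)  # labels that ignore the structure

loss_good = part_contrastive_loss(feats, good_ids)
loss_bad = part_contrastive_loss(feats, bad_ids)
```

Features consistent with their part labels yield a lower loss than mislabeled ones, which is the signal that pulls same-part features together during training.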
📝 Abstract
We propose PartField, a feedforward approach for learning part-based 3D features, which captures the general concept of parts and their hierarchy without relying on predefined templates or text-based names, and can be applied to open-world 3D shapes across various modalities. PartField requires only a 3D feedforward pass at inference time, significantly improving runtime and robustness compared to prior approaches. Our model is trained by distilling 2D and 3D part proposals from a mix of labeled datasets and image segmentations on large unsupervised datasets, via a contrastive learning formulation. It produces a continuous feature field which can be clustered to yield a hierarchical part decomposition. Comparisons show that PartField is up to 20% more accurate and often orders of magnitude faster than other recent class-agnostic part-segmentation methods. Beyond single-shape part decomposition, consistency in the learned field emerges across shapes, enabling tasks such as co-segmentation and correspondence, which we demonstrate in several applications of these general-purpose, hierarchical, and consistent 3D feature fields. Check our Webpage! https://research.nvidia.com/labs/toronto-ai/partfield-release/
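The abstract states that the continuous feature field "can be clustered to yield a hierarchical part decomposition." A minimal sketch of that decoding idea on synthetic per-point features, using a simple centroid-linkage agglomerative clustering cut at two granularities (the actual PartField clustering procedure and hyperparameters are not described here, so this is an assumption-laden illustration):

```python
import numpy as np

def agglomerative_labels(feats, k):
    """Greedy centroid-linkage agglomerative clustering, cut at k clusters."""
    clusters = [[i] for i in range(len(feats))]
    while len(clusters) > k:
        best, best_d = None, np.inf
        # find the pair of clusters with the closest feature centroids
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(feats[clusters[a]].mean(axis=0)
                                   - feats[clusters[b]].mean(axis=0))
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        clusters[a] = clusters[a] + clusters[b]  # merge b into a
        del clusters[b]
    labels = np.empty(len(feats), dtype=int)
    for ci, idxs in enumerate(clusters):
        labels[idxs] = ci
    return labels

# synthetic "feature field" samples: two coarse parts, each with two sub-parts
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [0, 1], [5, 0], [5, 1]], dtype=float)
feats = np.concatenate([c + 0.05 * rng.standard_normal((10, 2))
                        for c in centers])

coarse = agglomerative_labels(feats, 2)  # coarse part decomposition
fine = agglomerative_labels(feats, 4)    # finer level of the hierarchy
```

Cutting the same merge tree at different depths yields nested segmentations, which is how a single feature field can expose a whole part hierarchy rather than one fixed granularity.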