Surface-Aware Distilled 3D Semantic Features

📅 2025-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Semantic ambiguity in 3D shape correspondence—e.g., left/right hand confusion—leads to erroneous mappings. Existing methods rely on pre-trained visual semantic features, which often conflate semantically similar instances and fail to resolve geometric ambiguities among distant surface points. Method: We propose a self-supervised, surface-aware embedding framework that introduces a novel surface-aware contrastive loss. This loss preserves semantic consistency while explicitly disentangling ambiguities arising from non-local geometric configurations. The method requires only a small set of unpaired 3D meshes for generalization to unseen shapes, eliminating the need for manual annotations or paired data. Technically, it integrates knowledge distillation for 3D semantic feature extraction, explicit mesh surface geometry modeling, and ambiguity-aware optimization in feature space. Results: Our approach achieves state-of-the-art performance on standard correspondence benchmarks and robustly supports downstream tasks including part segmentation, pose alignment, and motion transfer.
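The summary describes the surface-aware contrastive loss only at a high level. A minimal numerical sketch of the idea can illustrate it: features of points that are close on the mesh surface are kept consistent, while features of geodesically distant points are pushed apart by a margin. The function name, the `radius` and `margin` parameters, and the use of normalized geodesic distances are all illustrative assumptions here, not the paper's actual implementation.

```python
import numpy as np

def surface_aware_contrastive_loss(feats, geodesic, radius=0.2, margin=1.0):
    """Illustrative surface-aware contrastive loss (not the paper's code).

    feats    : (N, D) per-vertex feature vectors (e.g., distilled 2D features)
    geodesic : (N, N) pairwise geodesic distances, normalized to [0, 1]
    """
    # Pairwise feature-space distances between all vertices
    diff = feats[:, None, :] - feats[None, :, :]
    fdist = np.linalg.norm(diff, axis=-1)

    near = geodesic < radius   # surface-local pairs: keep features consistent
    far = ~near                # distant pairs: candidates for ambiguity

    # Attract: nearby surface points should share similar features
    attract = (fdist[near] ** 2).mean()
    # Repel: far-apart surface points must differ by at least `margin`
    repel = (np.maximum(0.0, margin - fdist[far]) ** 2).mean()
    return attract + repel
```

Intuitively, this is how left/right-hand ambiguity is resolved: the two hands carry near-identical distilled semantic features but lie far apart along the surface, so the repulsion term forces their embeddings apart while the attraction term preserves local semantic consistency.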

📝 Abstract
Many 3D tasks such as pose alignment, animation, motion transfer, and 3D reconstruction rely on establishing correspondences between 3D shapes. This challenge has recently been approached by matching semantic features from pre-trained vision models. However, despite their power, these features struggle to differentiate instances of the same semantic class, such as "left hand" versus "right hand", which leads to substantial mapping errors. To solve this, we learn a surface-aware embedding space that is robust to these ambiguities. Importantly, our approach is self-supervised and requires only a small number of unpaired training meshes to infer features for new 3D shapes at test time. We achieve this by introducing a contrastive loss that preserves the semantic content of the features distilled from foundational models while disambiguating features located far apart on the shape's surface. We observe superior performance in correspondence matching benchmarks and enable downstream applications including part segmentation, pose alignment, and motion transfer. The project site is available at https://lukas.uzolas.com/SurfaceAware3DFeaturesSite.
Problem

Research questions and friction points this paper is trying to address.

Differentiate same-class 3D instances (e.g., left vs right hand)
Learn self-supervised surface-aware embedding for 3D shapes
Improve correspondence matching for segmentation and motion tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surface-aware embedding space for 3D shapes
Self-supervised learning with contrastive loss
Distilled semantic features from foundational models