PatchAlign3D: Local Feature Alignment for Dense 3D Shape Understanding

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited transferability of existing 3D foundation models on local part-understanding tasks; prior approaches often rely on multi-view rendering and large language model (LLM) prompting while neglecting intrinsic 3D geometric structure. The authors propose a novel encoder-only 3D point cloud model that, through a two-stage pretraining strategy, directly generates local features aligned with textual semantics, enabling, for the first time, zero-shot part segmentation via a single forward pass without multi-view rendering or LLM prompts. Built upon a point cloud Transformer, the method integrates DINOv2-based dense feature distillation with multi-positive contrastive learning to align 3D local geometry with part-level text embeddings. Experiments demonstrate that the model significantly outperforms current approaches across multiple 3D part segmentation benchmarks, achieving both high efficiency and accuracy.

📝 Abstract
Current foundation models for 3D shapes excel at global tasks (retrieval, classification) but transfer poorly to local part-level reasoning. Recent approaches leverage vision and language foundation models to directly solve dense tasks through multi-view renderings and text queries. While promising, these pipelines require expensive inference over multiple renderings, depend heavily on large language model (LLM) prompt engineering for captions, and fail to exploit the inherent 3D geometry of shapes. We address this gap by introducing an encoder-only 3D model that produces language-aligned patch-level features directly from point clouds. Our pre-training approach builds on existing data engines that generate part-annotated 3D shapes by pairing multi-view SAM regions with VLM captioning. Using this data, we train a point cloud transformer encoder in two stages: (1) distillation of dense 2D features from visual encoders such as DINOv2 into 3D patches, and (2) alignment of these patch embeddings with part-level text embeddings through a multi-positive contrastive objective. Our 3D encoder achieves zero-shot 3D part segmentation with fast single-pass inference without any test-time multi-view rendering, while significantly outperforming previous rendering-based and feed-forward approaches across several 3D part segmentation benchmarks. Project website: https://souhail-hadgi.github.io/patchalign3dsite/
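The two training stages in the abstract can be sketched as losses. This is an illustrative sketch only, not the authors' code: all function names, tensor shapes, and the temperature value are assumptions. Stage 1 pulls projected 3D patch features toward dense 2D features (e.g., from DINOv2) lifted onto the same patches; stage 2 is a multi-positive InfoNCE objective where each patch may match several part-level captions.

```python
# Hypothetical sketch of the two pre-training objectives; shapes, names,
# and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def distillation_loss(patch_feats_3d, feats_2d):
    """Stage 1: cosine-distance distillation of dense 2D features
    (e.g. DINOv2, lifted onto 3D patches) into the 3D encoder's patches."""
    p = F.normalize(patch_feats_3d, dim=-1)
    t = F.normalize(feats_2d, dim=-1)
    return (1.0 - (p * t).sum(dim=-1)).mean()

def multi_positive_contrastive_loss(patch_embs, text_embs, pos_mask, tau=0.07):
    """Stage 2: align patch embeddings with part-level text embeddings.
    pos_mask[i, j] is True when caption j describes patch i's part, so a
    patch can have several positives (multi-positive InfoNCE)."""
    logits = (F.normalize(patch_embs, dim=-1)
              @ F.normalize(text_embs, dim=-1).T) / tau
    # Log-softmax over captions, then average log-prob of the positives.
    log_prob = logits - torch.logsumexp(logits, dim=-1, keepdim=True)
    pos_per_patch = pos_mask.sum(dim=-1).clamp(min=1)
    return -((log_prob * pos_mask).sum(dim=-1) / pos_per_patch).mean()

# Toy usage: 8 patches, 4 part captions, 16-dim embeddings.
patches = torch.randn(8, 16)
texts = torch.randn(4, 16)
mask = torch.zeros(8, 4, dtype=torch.bool)
mask[torch.arange(8), torch.randint(0, 4, (8,))] = True
loss = multi_positive_contrastive_loss(patches, texts, mask)
```

At inference, zero-shot part segmentation then reduces to a single forward pass: each patch is labeled by its nearest part-text embedding under the same cosine similarity.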
Problem

Research questions and friction points this paper is trying to address.

3D part segmentation
local feature alignment
dense 3D understanding
foundation models
point cloud
Innovation

Methods, ideas, or system contributions that make the work stand out.

PatchAlign3D
3D part segmentation
point cloud transformer
feature distillation
contrastive alignment
Authors
Souhail Hadgi, École polytechnique
Bingchen Gong, École polytechnique
Ramanathan Sundararaman, École polytechnique
Emery Pierson, École polytechnique
Lei Li, University of Virginia
Peter Wonka, King Abdullah University of Science and Technology (KAUST)
M. Ovsjanikov, École polytechnique