SegVGGT: Joint 3D Reconstruction and Instance Segmentation from Multi-View Images

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D instance segmentation methods rely on high-quality point clouds or pose-aligned RGB-D data, resulting in complex pipelines that are sensitive to reconstruction noise and struggle to fuse geometric and semantic information effectively. This work proposes SegVGGT, the first end-to-end framework to integrate instance segmentation into a visual geometry grounded transformer, enabling joint feed-forward 3D reconstruction and instance segmentation directly from multi-view RGB images. Object queries interact with multi-level geometric features, and a Frame-level Attention Distribution Alignment (FADA) strategy mitigates the attention dispersion caused by the large number of global image tokens, without additional inference overhead. SegVGGT achieves state-of-the-art performance on ScanNetv2 and ScanNet200, significantly outperforming existing joint models and RGB-D approaches, while also demonstrating strong generalization on ScanNet++.

📝 Abstract
3D instance segmentation methods typically rely on high-quality point clouds or posed RGB-D scans, requiring complex multi-stage processing pipelines, and are highly sensitive to reconstruction noise. While recent feed-forward transformers have revolutionized multi-view 3D reconstruction, they remain decoupled from high-level semantic understanding. In this work, we present SegVGGT, a unified end-to-end framework that simultaneously performs feed-forward 3D reconstruction and instance segmentation directly from multi-view RGB images. By introducing object queries that interact with multi-level geometric features, our method deeply integrates instance identification into the visual geometry grounded transformer. To address the severe attention dispersion problem caused by the massive number of global image tokens, we propose the Frame-level Attention Distribution Alignment (FADA) strategy. FADA explicitly guides object queries to attend to instance-relevant frames during training, providing structured supervision without extra inference overhead. Extensive experiments demonstrate that SegVGGT achieves state-of-the-art performance on ScanNetv2 and ScanNet200, outperforming both recent joint models and RGB-D-based approaches, while exhibiting strong generalization capabilities on ScanNet++.
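The abstract describes FADA as guiding each object query's attention toward the frames that contain its instance, as an auxiliary training signal. The paper's actual formulation is not given here; as an illustrative sketch only, such supervision can be read as a KL-divergence term between a query's frame-level attention distribution and a target distribution over its relevant frames. The function name, the per-frame pooling assumption, and the uniform target below are all assumptions, not the paper's implementation.

```python
import numpy as np

def fada_loss(attn_weights, frame_relevance, eps=1e-8):
    """Hypothetical sketch of a frame-level attention alignment loss.

    attn_weights:    (Q, F) array; each row is one object query's attention
                     over F frames (image tokens assumed already pooled per
                     frame and normalized to sum to 1 per query).
    frame_relevance: (Q, F) binary array; 1 where frame f contains the
                     instance matched to query q.

    Returns the mean KL divergence between a uniform target over each
    query's relevant frames and that query's attention distribution.
    """
    # Uniform target over the frames marked relevant for each query.
    target = frame_relevance / np.clip(
        frame_relevance.sum(axis=1, keepdims=True), eps, None)
    # KL(target || attention), summed over frames, averaged over queries.
    kl = np.where(target > 0,
                  target * np.log((target + eps) / (attn_weights + eps)),
                  0.0)
    return kl.sum(axis=1).mean()
```

In this reading, a query whose attention already concentrates on its instance's frames incurs near-zero loss, while attention dispersed across irrelevant frames is penalized; since the term only shapes attention during training, it adds no inference-time cost, consistent with the abstract's claim.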
Problem

Research questions and friction points this paper is trying to address.

3D instance segmentation
multi-view 3D reconstruction
RGB-D scans
semantic understanding
reconstruction noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

SegVGGT
joint 3D reconstruction and instance segmentation
object queries
Frame-level Attention Distribution Alignment
multi-view RGB images