SegviGen: Repurposing 3D Generative Model for Part Segmentation

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing 3D part segmentation methods, which typically rely on large-scale annotated data and suffer from cross-view inconsistency and ambiguous boundaries. The authors propose the first approach to repurpose pretrained native 3D generative models for segmentation by leveraging part-specific coloring and voxel-level geometric alignment during reconstruction. Requiring only 0.32% of the standard annotation budget, the method supports interactive, fully automatic, and 2D-guided segmentation modes. It achieves performance gains of 40% and 15% over state-of-the-art techniques in interactive and fully automatic settings, respectively, thereby overcoming the dependence on extensive labeled data and multi-view fusion.

📝 Abstract
We introduce SegviGen, a framework that repurposes native 3D generative models for 3D part segmentation. Existing pipelines either lift strong 2D priors into 3D via distillation or multi-view mask aggregation, often suffering from cross-view inconsistency and blurred boundaries, or explore native 3D discriminative segmentation, which typically requires large-scale annotated 3D data and substantial training resources. In contrast, SegviGen leverages the structured priors encoded in pretrained 3D generative models to induce segmentation through distinctive part colorization, establishing a novel and efficient framework for part segmentation. Specifically, SegviGen encodes a 3D asset and predicts part-indicative colors on the active voxels of a geometry-aligned reconstruction. It supports interactive part segmentation, full segmentation, and full segmentation with 2D guidance in a unified framework. Extensive experiments show that SegviGen improves over the prior state of the art by 40% on interactive part segmentation and by 15% on full segmentation, while using only 0.32% of the labeled training data. This demonstrates that pretrained 3D generative priors transfer effectively to 3D part segmentation, enabling strong performance with limited supervision. See our project page at https://fenghora.github.io/SegviGen-Page/.
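The abstract describes inducing segmentation by predicting "part-indicative colors" on active voxels. One plausible way to recover discrete part labels from such color predictions is nearest-neighbor matching against a palette of distinctive part colors. The sketch below illustrates that idea only; the palette, the nearest-color rule, and the function name are assumptions for illustration, not the paper's published procedure.

```python
import numpy as np

def colors_to_part_labels(voxel_colors: np.ndarray,
                          palette: np.ndarray) -> np.ndarray:
    """Map per-voxel RGB predictions (N, 3) to part indices (N,)
    by nearest palette color in Euclidean RGB space.

    Illustrative assumption: the actual SegviGen decoding step
    may differ; this only shows how distinctive colorization can
    yield discrete part labels.
    """
    # (N, 1, 3) - (1, K, 3) -> (N, K, 3) pairwise color differences
    diffs = voxel_colors[:, None, :] - palette[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)  # (N, K) distances
    return dists.argmin(axis=1)             # nearest part color

# Toy example: 3 active voxels, palette of 2 parts (red, blue).
palette = np.array([[1.0, 0.0, 0.0],   # part 0: red
                    [0.0, 0.0, 1.0]])  # part 1: blue
pred = np.array([[0.9, 0.1, 0.0],      # near red  -> part 0
                 [0.1, 0.0, 0.8],      # near blue -> part 1
                 [0.6, 0.1, 0.2]])     # near red  -> part 0
labels = colors_to_part_labels(pred, palette)
print(labels)  # [0 1 0]
```

Because labels are read off from colors voxel by voxel on a single geometry-aligned reconstruction, this style of decoding avoids the multi-view mask fusion that causes cross-view inconsistency.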
Problem

Research questions and friction points this paper is trying to address.

3D part segmentation
cross-view inconsistency
limited supervision
annotated 3D data
blurred boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D generative model repurposing
part segmentation
structured priors
distinctive colorization
limited supervision
Authors
Lin Li, Beihang University
Haoran Feng, Tsinghua University · Computer Vision
Zehuan Huang, Beihang University · Generative Model, Computer Vision
Haohua Chen, Beihang University
Wenbo Nie, Beihang University
Shaohua Hou, Beihang University
Keqing Fan, Beihang University
Pan Hu, Beihang University
Sheng Wang, OriginArk
Buyu Li, OriginArk
Lu Sheng, School of Software, Beihang University · Embodied AI, 3D Vision, Machine Learning