View2CAD: Reconstructing View-Centric CAD Models from Single RGB-D Scans

📅 2025-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenging problem of reconstructing boundary-representation (B-rep) CAD models from a single RGB-D image under viewpoint-centered observation. We propose View-based B-rep (VB-Rep), the first view-aware B-rep representation that explicitly encodes visibility constraints and geometric uncertainty, overcoming the limitation of existing methods that require complete, noise-free 3D inputs. Our method integrates panoptic image segmentation with depth-aware iterative geometric optimization, leveraging semantic priors to guide accurate boundary reconstruction and suppress geometric hallucinations and topological inaccuracies under partial observability. The output is a fully parametric, editable, and manufacturable B-rep model. Evaluated on both synthetic and real-world RGB-D datasets, our approach achieves significantly higher reconstruction fidelity, effectively bridging the semantic and geometric gap between real-world scenes and parametric CAD modeling.

📝 Abstract
Parametric CAD models, represented as Boundary Representations (B-reps), are foundational to modern design and manufacturing workflows, offering the precision and topological breakdown required for downstream tasks such as analysis, editing, and fabrication. However, B-reps are often inaccessible due to conversion to more standardized, less expressive geometry formats. Existing methods to recover B-reps from measured data require complete, noise-free 3D data, which are laborious to obtain. We alleviate this difficulty by enabling the precise reconstruction of CAD shapes from a single RGB-D image. We propose a method that addresses the challenge of reconstructing only the observed geometry from a single view. To allow for these partial observations, and to avoid hallucinating incorrect geometry, we introduce a novel view-centric B-rep (VB-Rep) representation, which incorporates structures to handle visibility limits and encode geometric uncertainty. We combine panoptic image segmentation with iterative geometric optimization to refine and improve the reconstruction process. Our results demonstrate high-quality reconstruction on synthetic and real RGB-D data, showing that our method can bridge the reality gap.
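The core geometric step described in the abstract, fitting parametric surfaces to depth regions delimited by a panoptic segmentation, can be sketched as follows. This is a minimal illustration only: it backprojects a depth map through a pinhole camera and least-squares fits one plane per segment label. The function names, the plane-only surface model, and the identity-intrinsics default are our own simplifying assumptions, not the paper's actual pipeline or API.

```python
import numpy as np

def backproject(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Lift a depth map to a point cloud with a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c over an Nx3 point set."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef  # (a, b, c)

def reconstruct_faces(depth, seg):
    """Fit one plane per segmentation label: {label: (a, b, c)}."""
    pts = backproject(depth).reshape(-1, 3)
    labels = seg.reshape(-1)
    return {lab: fit_plane(pts[labels == lab]) for lab in np.unique(labels)}
```

In the actual method, a step like this would sit inside an iterative loop that alternates refining the segment boundaries and the fitted geometry, with per-face uncertainty tracked in the VB-Rep structure; here only the single fitting pass is shown.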
Problem

Research questions and friction points this paper is trying to address.

Reconstruct CAD models from single RGB-D scans
Address partial geometry observation challenges
Introduce view-centric B-rep for uncertainty handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstruct CAD from single RGB-D image
Introduce view-centric B-rep representation
Combine segmentation with geometric optimization
James Noeckel
University of Washington, Seattle, WA, USA
Benjamin Jones
Massachusetts Institute of Technology, Cambridge, MA, USA
Adriana Schulz
University of Washington
Brian Curless
Professor of Computer Science & Engineering, University of Washington
Computer Graphics · Computer Vision · Human-Computer Interaction