🤖 AI Summary
This work addresses the challenging problem of reconstructing boundary-representation (B-rep) CAD models from a single RGB-D image under viewpoint-centered observation. We propose the view-centric B-rep (VB-Rep), the first view-aware B-rep representation that explicitly encodes visibility constraints and geometric uncertainty, overcoming the limitation of existing methods that require complete, noise-free 3D inputs. Our method integrates panoptic image segmentation with depth-aware iterative geometric optimization, leveraging semantic priors to guide accurate boundary reconstruction and suppress geometric hallucinations and topological inaccuracies under partial observability. The output is a fully parametric, editable, and manufacturable B-rep model. Evaluated on both synthetic and real-world RGB-D datasets, our approach achieves significantly higher reconstruction fidelity, effectively bridging the semantic and geometric gap between real-world scenes and parametric CAD modeling.
📝 Abstract
Parametric CAD models, represented as Boundary Representations (B-reps), are foundational to modern design and manufacturing workflows, offering the precision and topological structure required for downstream tasks such as analysis, editing, and fabrication. However, B-reps are often inaccessible because models are converted to more standardized, less expressive geometry formats. Existing methods to recover B-reps from measured data require complete, noise-free 3D data, which are laborious to obtain. We alleviate this difficulty by enabling the precise reconstruction of CAD shapes from a single RGB-D image. We propose a method that addresses the challenge of reconstructing only the observed geometry from a single view. To allow for these partial observations, and to avoid hallucinating incorrect geometry, we introduce a novel view-centric B-rep (VB-Rep) representation, which incorporates structures to handle visibility limits and encode geometric uncertainty. We combine panoptic image segmentation with iterative geometric optimization to refine and improve the reconstruction process. Our results demonstrate high-quality reconstruction on synthetic and real RGB-D data, showing that our method can bridge the reality gap.
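To give a concrete flavor of the "iterative geometric optimization on depth data" idea mentioned above, the toy sketch below fits a planar surface primitive to noisy depth points from a segmented region, iteratively rejecting outliers (e.g. depth noise or occluded pixels). This is only an illustrative stand-in for the paper's actual optimization; the function name, threshold `tau`, and synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_plane_iterative(points, n_iters=5, tau=0.05):
    """Fit z = a*x + b*y + c to (N, 3) points by iteratively
    reweighted least squares, down-weighting outliers beyond tau.
    Illustrative only; not the paper's actual optimizer."""
    w = np.ones(len(points))
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    for _ in range(n_iters):
        # weighted least-squares solve (zeroed rows drop outliers)
        coeffs, *_ = np.linalg.lstsq(A * w[:, None],
                                     points[:, 2] * w, rcond=None)
        resid = np.abs(A @ coeffs - points[:, 2])
        w = (resid < tau).astype(float)  # binary outlier rejection
    return coeffs, w

# Synthetic "segmented region": plane z = 0.5x - 0.2y + 1,
# with the first 10 points corrupted to mimic depth artifacts.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0
z[:10] += 0.5
coeffs, w = fit_plane_iterative(np.c_[xy, z])
```

After a couple of iterations the corrupted points are excluded and the recovered coefficients match the true plane; in the full method, segmentation masks would decide which pixels feed each such surface fit.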