🤖 AI Summary
Existing methods for reconstructing CAD boundary representations (B-reps) rely heavily on dense, clean point clouds and generalize poorly to novel shapes. This work proposes BrepGaussian, the first framework to integrate Gaussian splatting rendering into B-rep reconstruction, learning parametric B-reps directly from multi-view 2D images. It employs a two-stage learning strategy: first recovering geometric structures and edge lines, then refining surface-patch features, thereby enabling end-to-end 2D-to-B-rep reconstruction. By decoupling geometry recovery from feature learning, the approach significantly improves generalization to novel shapes and geometric consistency. Experiments show that BrepGaussian outperforms state-of-the-art methods across multiple metrics, producing cleaner and more consistent CAD models. The authors will release their code and dataset to facilitate further research.
📝 Abstract
The boundary representation (B-rep) models a 3D solid by its explicit boundaries: corner vertices, trimmed edges, and faces. Recovering a B-rep from unstructured data is a challenging and valuable task in computer vision and graphics. Recent advances in deep learning have greatly improved the recovery of 3D shape geometry, but existing methods still depend on dense, clean point clouds and struggle to generalize to novel shapes. We propose B-rep Gaussian Splatting (BrepGaussian), a novel framework that learns 3D parametric representations from 2D images. We employ a Gaussian Splatting renderer with learnable features, followed by a dedicated fitting strategy. To disentangle geometry reconstruction from feature learning, we introduce a two-stage learning framework that first captures geometry and edges and then refines patch features, yielding clean geometry and coherent instance representations. Extensive experiments demonstrate the superior performance of our approach over state-of-the-art methods. We will release our code and datasets upon acceptance.
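The two-stage decoupling described in the abstract can be sketched as a simple training schedule: stage 1 optimizes only the geometry/edge parameters, stage 2 freezes them and refines only the per-patch features. The sketch below is a minimal, hypothetical illustration of that schedule; the quadratic toy losses, parameter names, and plain-Python optimizer are illustrative stand-ins, not the paper's actual objectives or implementation.

```python
# Hypothetical sketch of a two-stage "geometry first, features second"
# schedule. Toy quadratic losses stand in for the real rendering and
# fitting objectives; all names here are illustrative assumptions.

def sq_loss(params, targets):
    """Toy quadratic loss standing in for a rendering/fitting loss."""
    return sum((p - t) ** 2 for p, t in zip(params, targets))

def grad(params, targets):
    """Analytic gradient of the toy quadratic loss."""
    return [2.0 * (p - t) for p, t in zip(params, targets)]

def sgd_step(params, targets, lr=0.1):
    """One gradient-descent step on the toy loss."""
    return [p - lr * g for p, g in zip(params, grad(params, targets))]

def train_two_stage(model, targets, stage1_steps=50, stage2_steps=50):
    # Stage 1: recover geometry and edges; features are left untouched.
    for _ in range(stage1_steps):
        model["geometry"] = sgd_step(model["geometry"], targets["geometry"])
    # Stage 2: geometry is frozen; only patch features are refined,
    # decoupling feature learning from geometry reconstruction.
    for _ in range(stage2_steps):
        model["features"] = sgd_step(model["features"], targets["features"])
    return model
```

The point of the schedule is that the feature head never perturbs the geometry once it has converged, which is one plausible way to obtain the "clean geometry and coherent instance representations" the abstract claims.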