Faithful Contouring: Near-Lossless 3D Voxel Representation Free from Iso-surface

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing isosurface-based 3D mesh voxelization methods rely on watertightening or rendering optimizations, compromising geometric fidelity and internal structure preservation. This paper proposes a sparse voxelization representation that bypasses isosurface extraction, implicit field fitting, and remeshing, enabling high-resolution, near-lossless encoding of arbitrary (non-watertight) meshes. Our core innovations are a silhouette-preserving voxel encoding strategy and a dual-mode autoencoder architecture, which jointly ensure sharp feature retention, explicit modeling of interior geometry, and topology-flexible manipulation. Experiments demonstrate that our direct representation achieves signed distance errors on the order of 10⁻⁵; in reconstruction tasks, it reduces Chamfer Distance by 93% and improves F-score by 35% over state-of-the-art methods, significantly advancing fidelity, structural accuracy, and topological expressiveness.

📝 Abstract
Accurate and efficient voxelized representations of 3D meshes are the foundation of 3D reconstruction and generation. However, existing iso-surface-based representations rely heavily on watertightening or rendering optimization, which inevitably compromises geometric fidelity. We propose Faithful Contouring, a sparse voxelized representation that supports 2048+ resolutions for arbitrary meshes, requiring neither converting meshes to field functions nor extracting the iso-surface during remeshing. It achieves near-lossless fidelity by preserving sharpness and internal structures, even in challenging cases with complex geometry and topology. The proposed method also offers flexibility for texturing, manipulation, and editing. Beyond representation, we design a dual-mode autoencoder for Faithful Contouring, enabling scalable and detail-preserving shape reconstruction. Extensive experiments show that Faithful Contouring surpasses existing methods in accuracy and efficiency for both representation and reconstruction. For direct representation, it achieves distance errors at the $10^{-5}$ level; for mesh reconstruction, it yields a 93% reduction in Chamfer Distance and a 35% improvement in F-score over strong baselines, confirming its superior fidelity as a representation for 3D learning tasks.
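The reported gains are measured with standard surface-comparison metrics. As a reference, here is a minimal sketch of symmetric Chamfer Distance and F-score between two sampled point sets; the brute-force pairwise computation and the threshold `tau` are illustrative choices, not the paper's evaluation code:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N,3) and q (M,3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N,M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def f_score(p, q, tau=0.01):
    """F-score at distance threshold tau: harmonic mean of precision and recall."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()  # fraction of p within tau of q
    recall = (d.min(axis=0) < tau).mean()     # fraction of q within tau of p
    return 2 * precision * recall / (precision + recall + 1e-12)
```

In practice, a k-d tree replaces the dense distance matrix once the point sets exceed a few thousand samples.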
Problem

Research questions and friction points this paper is trying to address.

Achieving near-lossless 3D voxel representation without iso-surface extraction
Preserving geometric fidelity for complex shapes with sharp features
Enabling high-resolution representation and reconstruction of arbitrary meshes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse voxel representation without iso-surface extraction
Preserves sharpness and internal structures near-losslessly
Dual-mode autoencoder enables scalable shape reconstruction
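To make the "sparse voxel representation" bullet concrete, the sketch below quantizes surface samples into a sparse set of occupied voxel coordinates. This is a generic occupancy illustration under assumed inputs (a point cloud sampled from the mesh surface); the paper's silhouette-preserving encoding stores richer per-voxel information that is not reproduced here:

```python
import numpy as np

def sparse_voxelize(points, resolution=2048):
    """Map surface samples (N,3) to unique occupied voxel indices (K,3).

    Sparse storage keeps only occupied cells, which is what makes
    resolutions like 2048^3 tractable compared with a dense grid.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = (resolution - 1) / (hi - lo).max()       # uniform scale preserves aspect ratio
    ijk = np.floor((points - lo) * scale).astype(np.int64)
    ijk = np.clip(ijk, 0, resolution - 1)            # guard the upper boundary
    return np.unique(ijk, axis=0)                    # deduplicate occupied cells
```

A dense boolean grid at 2048³ would need ~8 GB for occupancy alone, whereas the sparse index list scales with surface area, which is why sparse encodings dominate at high resolution.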