GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of simultaneously achieving high geometric accuracy, completeness, and fine-detail fidelity in sparse-voxel surface reconstruction, this paper proposes GeoSVR, an explicit volumetric framework. The method integrates differentiable rendering with explicit voxel optimization and introduces two key innovations: (1) uncertainty-aware depth supervision, which leverages per-pixel uncertainty maps from monocular depth estimation to stabilize optimization and improve convergence robustness; and (2) sparse-voxel surface regularization, which enforces geometric consistency to sharpen surfaces and preserve topological integrity, especially for small-scale voxels. By jointly optimizing voxel occupancy and surface geometry under these constraints, GeoSVR remains computationally efficient without sacrificing reconstruction quality. Extensive experiments demonstrate that GeoSVR outperforms state-of-the-art radiance-field methods, both implicit and explicit, across diverse complex scenes, delivering superior geometric accuracy, more complete surface coverage, and better preservation of fine geometric detail.
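The uncertainty-aware depth supervision described above can be sketched as an uncertainty-weighted depth loss: monocular depth cues are down-weighted wherever their per-pixel uncertainty is high, so unreliable estimates do not corrupt well-constrained geometry. The function and weighting scheme below are illustrative assumptions for exposition, not GeoSVR's actual formulation or API.

```python
import numpy as np

def uncertainty_weighted_depth_loss(rendered_depth, mono_depth, uncertainty, eps=1e-6):
    """Hypothetical sketch: supervise rendered depth with a monocular depth
    prior, attenuating the loss where per-pixel uncertainty is high."""
    # Confidence in (0, 1]: high uncertainty -> small weight.
    weight = 1.0 / (1.0 + uncertainty)
    residual = np.abs(rendered_depth - mono_depth)
    # Confidence-weighted mean absolute error over all pixels.
    return float(np.sum(weight * residual) / (np.sum(weight) + eps))

# Toy example: 2x2 depth maps; one pixel has a bad monocular estimate,
# but its high uncertainty keeps it from dominating the loss.
rendered = np.array([[1.0, 2.0], [3.0, 4.0]])
mono     = np.array([[1.1, 2.0], [3.0, 9.0]])   # last pixel is an outlier
unc      = np.array([[0.0, 0.0], [0.0, 50.0]])  # outlier flagged as uncertain
loss = uncertainty_weighted_depth_loss(rendered, mono, unc)
```

With uniform weights the mean absolute error here would be 1.275; the uncertainty weighting pushes it well below 0.1, showing how a flagged outlier is effectively ignored.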

📝 Abstract
Reconstructing accurate surfaces with radiance fields has achieved remarkable progress in recent years. However, prevailing approaches, primarily based on Gaussian Splatting, are increasingly constrained by representational bottlenecks. In this paper, we introduce GeoSVR, an explicit voxel-based framework that explores and extends the under-investigated potential of sparse voxels for achieving accurate, detailed, and complete surface reconstruction. On the upside, sparse voxels help preserve coverage completeness and geometric clarity; however, challenges arise from the absence of scene-level constraints and the locality of surface refinement. To ensure correct scene convergence, we first propose a Voxel-Uncertainty Depth Constraint that maximizes the effect of monocular depth cues while introducing a voxel-oriented uncertainty measure to avoid quality degradation, enabling effective and robust scene constraints while preserving highly accurate geometry. Subsequently, Sparse Voxel Surface Regularization is designed to enhance geometric consistency for tiny voxels and facilitate the voxel-based formation of sharp and accurate surfaces. Extensive experiments demonstrate our superior performance compared to existing methods across diverse challenging scenarios, excelling in geometric accuracy, detail preservation, and reconstruction completeness while maintaining high efficiency. Code is available at https://github.com/Fictionarry/GeoSVR.
Problem

Research questions and friction points this paper is trying to address.

Overcoming representational bottlenecks in surface reconstruction methods
Addressing sparse voxels' challenges in geometric accuracy and completeness
Enhancing geometric consistency for sharp and accurate surface formation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Voxel-Uncertainty Depth Constraint for robust, accuracy-preserving scene constraints
Sparse Voxel Surface Regularization for geometric consistency of tiny voxels
Explicit sparse-voxel framework for detailed and complete surface reconstruction
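The Sparse Voxel Surface Regularization listed above enforces geometric consistency between neighboring voxels. One minimal way to picture such a term is a penalty on normal disagreement between face-adjacent voxels in a sparse grid; the sketch below is a hypothetical illustration of that idea under assumed names, not the paper's actual regularizer.

```python
import numpy as np

def sparse_voxel_normal_consistency(coords, normals):
    """Hypothetical sketch: average (1 - cos angle) between unit normals
    of face-adjacent voxels in a sparse integer grid. Small values mean
    neighboring voxels agree on surface orientation."""
    index = {tuple(c): i for i, c in enumerate(coords)}
    offsets = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # one direction per axis
    penalty, pairs = 0.0, 0
    for c, i in index.items():
        for off in offsets:
            j = index.get((c[0] + off[0], c[1] + off[1], c[2] + off[2]))
            if j is not None:
                # 1 - cos(angle) between the two unit normals.
                penalty += 1.0 - float(np.dot(normals[i], normals[j]))
                pairs += 1
    return penalty / max(pairs, 1)

# Three voxels in a row: the first two agree, the third is orthogonal.
coords  = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
reg = sparse_voxel_normal_consistency(coords, normals)
```

Here the aligned pair contributes 0 and the orthogonal pair contributes 1, so the average penalty is 0.5; driving such a term toward zero encourages the sharp, coherent surfaces the bullet describes.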
Jiahe Li
School of Computer Science and Engineering, State Key Laboratory of Complex Critical Software Environment, Jiangxi Research Institute, Beihang University

Jiawei Zhang
School of Computer Science and Engineering, State Key Laboratory of Complex Critical Software Environment, Jiangxi Research Institute, Beihang University

Youmin Zhang
Rawmantic AI
computer vision

Xiao Bai
Professor of Computer Science, Beihang University
pattern recognition, computer vision

Jin Zheng
Lecturer in Data Science, University of Bristol

Xiaohan Yu
Macquarie University
computer vision, smart farming, ultra-fine-grained visual categorization

Lin Gu
RIKEN AIP; The University of Tokyo