High-Fidelity and Generalizable Neural Surface Reconstruction with Sparse Feature Volumes

📅 2025-07-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the scalability limitations of dense volumetric representations in few-shot neural surface reconstruction—particularly their prohibitive memory consumption at high resolutions—this paper introduces a sparse feature volume representation coupled with a two-stage training framework. By constructing and querying voxels exclusively within high-occupancy regions, and integrating efficient feature aggregation with sparse volumetric rendering, our approach overcomes the memory bottlenecks inherent in dense occupancy assumptions. The method enables reconstruction at 512³ resolution, reducing memory footprint by over 50× on standard GPUs while improving geometric accuracy and significantly outperforming existing state-of-the-art methods. Key contributions include: (1) the first differentiable sparse voxel representation specifically designed for few-shot surface reconstruction; and (2) a joint optimization mechanism that unifies sparse sampling and rendering to balance computational efficiency and reconstruction fidelity.

Technology Category

Application Category

📝 Abstract
Generalizable neural surface reconstruction has become a compelling technique to reconstruct from few images without per-scene optimization, where dense 3D feature volume has proven effective as a global representation of scenes. However, the dense representation does not scale well to increasing voxel resolutions, severely limiting the reconstruction quality. We thus present a sparse representation method, that maximizes memory efficiency and enables significantly higher resolution reconstructions on standard hardware. We implement this through a two-stage approach: First training a network to predict voxel occupancies from posed images and associated depth maps, then computing features and performing volume rendering only in voxels with sufficiently high occupancy estimates. To support this sparse representation, we developed custom algorithms for efficient sampling, feature aggregation, and querying from sparse volumes-overcoming the dense-volume assumptions inherent in existing works. Experiments on public datasets demonstrate that our approach reduces storage requirements by more than 50 times without performance degradation, enabling reconstructions at $512^3$ resolution compared to the typical $128^3$ on similar hardware, and achieving superior reconstruction accuracy over current state-of-the-art methods.
Problem

Research questions and friction points this paper is trying to address.

Improves neural surface reconstruction quality with sparse feature volumes
Reduces storage needs by 50x while maintaining performance
Enables higher resolution reconstructions (512^3 vs typical 128^3)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse feature volumes for efficient reconstruction
Two-stage voxel occupancy prediction and rendering
Custom algorithms for sparse volume operations
🔎 Similar Papers
No similar papers found.