AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing neural radiance field (NeRF) methods struggle to simultaneously achieve high-fidelity geometric reconstruction and photorealistic rendering. This paper proposes a unified framework that jointly optimizes geometry and appearance within a single signed distance function (SDF) representation: it integrates multi-granularity implicit surface modeling with physics-inspired anisotropic spherical Gaussians (ASGs) for decoupled optimization of geometry and reflectance properties; further, it constructs a hybrid radiance field—comprising diffuse and specular components—to enable end-to-end differentiable rendering. To our knowledge, this is the first SDF-based approach to concurrently enhance both geometric fidelity and novel-view synthesis quality. Quantitatively, it reduces geometric error by 38% and improves novel-view PSNR by 2.1 dB across multiple benchmarks, without requiring object-level hyperparameter tuning.

Technology Category

Application Category

📝 Abstract
Neural radiance fields have recently revolutionized novel-view synthesis and achieved high-fidelity renderings. However, these methods sacrifice the geometry for the rendering quality, limiting their further applications including relighting and deformation. How to synthesize photo-realistic rendering while reconstructing accurate geometry remains an unsolved problem. In this work, we present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction. Different from previous neural surfaces, our fused-granularity geometry structure balances the overall structures and fine geometric details, producing accurate geometry reconstruction. To disambiguate geometry from reflective appearance, we introduce blended radiance fields to model diffuse and specularity following the anisotropic spherical Gaussian encoding, a physics-based rendering pipeline. With these designs, AniSDF can reconstruct objects with complex structures and produce high-quality renderings. Furthermore, our method is a unified model that does not require complex hyperparameter tuning for specific objects. Extensive experiments demonstrate that our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.
Problem

Research questions and friction points this paper is trying to address.

Enhances 3D geometry reconstruction quality.
Balances overall structure and fine details.
Improves rendering with physics-based encoding.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fused-granularity neural surfaces
Anisotropic spherical Gaussian encoding
Unified model without hyperparameter tuning
Jingnan Gao
Jingnan Gao
Ph.D. student at Shanghai Jiao Tong University
Computer Vision
Z
Zhuo Chen
Shanghai Jiao Tong University
Y
Yichao Yan
Shanghai Jiao Tong University
X
Xiaokang Yang
Shanghai Jiao Tong University