🤖 AI Summary
Traditional inverse rendering faces a trade-off between triangle meshes—compatible with standard graphics pipelines but limited in global illumination modeling—and neural fields—which achieve high reconstruction fidelity yet struggle with indirect lighting. This paper proposes an end-to-end framework unifying their strengths, leveraging a pre-trained neural field as a geometric and appearance prior while jointly optimizing a differentiable triangle mesh, PBR-compliant material parameters, and an HDR illumination probe. Key contributions include: (1) SDF-initialized DMTet for differentiable mesh generation; (2) the Neural Radiance Cache (NRC), an explicit model of indirect illumination that decouples it from surface reflectance to reduce lighting–material ambiguity; and (3) single-bounce differentiable Monte Carlo ray tracing for efficient global illumination modeling. Experiments demonstrate reduced indirect illumination baked into materials, improved material separation fidelity, and gains in mesh geometric accuracy, physical material consistency, and illumination dynamic range.
📝 Abstract
Traditional inverse rendering techniques are based on textured meshes, which adapt naturally to modern graphics pipelines, but costly differentiable multi-bounce Monte Carlo (MC) ray tracing poses challenges for modeling global illumination. Recently, neural fields have demonstrated impressive reconstruction quality but fall short in modeling indirect illumination. In this paper, we introduce a simple yet efficient inverse rendering framework that combines the strengths of both methods. Specifically, given a pre-trained neural field representing the scene, we can obtain an initial estimate of the signed distance field (SDF) and create a Neural Radiance Cache (NRC), an enhancement over the traditional radiance cache used in real-time rendering. By using the former to initialize differentiable marching tetrahedra (DMTet) and the latter to model indirect illumination, we can compute global illumination via single-bounce differentiable MC ray tracing and jointly optimize the geometry, material, and lighting through backpropagation. Experiments demonstrate that, compared to previous methods, our approach effectively prevents indirect illumination effects from being baked into materials, thus obtaining high-quality reconstructions of the triangle mesh, physically based rendering (PBR) materials, and the high dynamic range (HDR) light probe.
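The core idea of single-bounce MC ray tracing with a radiance cache can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: the `radiance_cache` stand-in (a constant radiance here, where the paper queries a neural network), the Lambertian BRDF, and all function names are assumptions made for the example. The key point it demonstrates is that the indirect bounce terminates immediately at the cache, so only one ray segment per sample needs to be traced and differentiated.

```python
import numpy as np

rng = np.random.default_rng(0)

def radiance_cache(x, d):
    # Stand-in for the Neural Radiance Cache (NRC): in the paper this would
    # be a neural field queried at position x along direction d. Here a
    # constant "ambient" radiance serves as a placeholder.
    return np.array([0.5, 0.5, 0.5])

def lambertian_brdf(albedo):
    # Simplest PBR-style diffuse lobe: f = albedo / pi (an assumption for
    # this sketch; the paper uses full PBR materials).
    return albedo / np.pi

def sample_cosine_hemisphere(n, rng):
    # Cosine-weighted direction about normal n; pdf = cos(theta) / pi.
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around n.
    t = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(n, [0.0, 1.0, 0.0])
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n

def shade(x, n, albedo, direct, n_samples=64):
    # Single-bounce MC estimate of outgoing radiance: direct lighting plus
    # one indirect bounce whose incoming radiance is read from the cache
    # instead of recursing into further bounces.
    f = lambertian_brdf(albedo)
    indirect = np.zeros(3)
    for _ in range(n_samples):
        wi = sample_cosine_hemisphere(n, rng)
        cos_t = max(np.dot(wi, n), 0.0)
        pdf = cos_t / np.pi
        if pdf > 0.0:
            indirect += f * radiance_cache(x + 1e-3 * wi, wi) * cos_t / pdf
    return direct + indirect / n_samples

L = shade(np.zeros(3), np.array([0.0, 0.0, 1.0]),
          albedo=np.array([0.8, 0.2, 0.2]),
          direct=np.array([0.1, 0.1, 0.1]))
# → [0.5, 0.2, 0.2]: with cosine sampling, cos/pdf cancels to pi, so each
# sample contributes exactly albedo * cache_radiance (zero variance here
# because the placeholder cache is constant).
```

In the actual framework all of these quantities (mesh vertices via DMTet, material parameters, light probe) would be differentiable, so gradients of `L` with respect to them can flow through this estimator via backpropagation.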