🤖 AI Summary
In multi-view global illumination rendering, conventional photon mapping suffers from severe redundancy because photons are traced and kernel estimates are computed independently for each view. To address this, we propose the Gaussian Photon Field (GPF), the first learnable, anisotropic, and continuously differentiable 3D photon representation. GPF models discrete photon distributions as a parametric Gaussian field, unifying physical photon transport with neural scene representation to enable single-pass training, iteration-free multi-view rendering, and differentiable radiance evaluation. The method combines SPPM initialization, multi-view radiance supervision, and implicit rendering of the continuous field, and supports differentiable ray–scene intersection. Evaluated on complex scenes featuring caustics and specular–diffuse coupling, GPF achieves photon-level accuracy while accelerating rendering by several orders of magnitude over traditional photon mapping, significantly outperforming both NeRF and existing photon-mapping variants.
📝 Abstract
Accurately modeling light transport is essential for realistic image synthesis. Photon mapping provides physically grounded estimates of complex global illumination effects such as caustics and specular–diffuse interactions, yet its per-view radiance estimation remains computationally inefficient when rendering multiple views of the same scene. The inefficiency arises from independent photon tracing and stochastic kernel estimation at each viewpoint, which leads to substantial redundant computation. To accelerate multi-view rendering, we reformulate photon mapping as a continuous and reusable radiance function. Specifically, we introduce the Gaussian Photon Field (GPF), a learnable representation that encodes photon distributions as anisotropic 3D Gaussian primitives parameterized by position, rotation, scale, and spectrum. GPF is initialized from physically traced photons in the first SPPM iteration and optimized with multi-view supervision of the final radiance, distilling photon-based light transport into a continuous field. Once trained, the field enables differentiable radiance evaluation along camera rays without repeated photon tracing or iterative refinement. Extensive experiments on scenes with complex light transport, such as caustics and specular–diffuse interactions, demonstrate that GPF attains photon-level accuracy while reducing computation by orders of magnitude, unifying the physical rigor of photon-based rendering with the efficiency of neural scene representations.
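To make the representation concrete, here is a minimal NumPy sketch of the primitive parameterization the abstract describes: each Gaussian carries a position, a rotation (quaternion), per-axis scales, and an RGB spectrum, and radiance along a camera ray is approximated by integrating the anisotropic Gaussian densities weighted by their spectra. All class and function names are hypothetical, and the fixed-step quadrature is our own stand-in for the paper's differentiable evaluation, not the authors' implementation.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = np.asarray(q, float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

class GaussianPhotonField:
    """Hypothetical sketch of a Gaussian photon field: N anisotropic 3D
    Gaussians with position, rotation, scale, and spectrum, as named in
    the abstract. In the paper these parameters would be optimized."""

    def __init__(self, positions, quats, scales, spectra):
        self.mu = np.asarray(positions, float)              # (N, 3) centers
        self.R = np.stack([quat_to_rot(q) for q in quats])  # (N, 3, 3)
        self.S = np.asarray(scales, float)                  # (N, 3) axis scales
        self.c = np.asarray(spectra, float)                 # (N, 3) RGB spectra
        # Anisotropic covariance Sigma = R diag(s^2) R^T; precompute inverses.
        Sigma = self.R @ (self.S[:, :, None]**2 * np.eye(3)) \
                       @ np.transpose(self.R, (0, 2, 1))
        self.Sigma_inv = np.linalg.inv(Sigma)

    def radiance_along_ray(self, origin, direction, t_max=10.0, n_samples=64):
        """Approximate radiance along origin + t*direction by fixed-step
        quadrature over the Gaussian densities (an assumption; the paper's
        differentiable evaluation is not specified in the abstract)."""
        ts = np.linspace(0.0, t_max, n_samples)
        pts = origin[None] + ts[:, None] * direction[None]   # (T, 3) samples
        diff = pts[:, None, :] - self.mu[None]               # (T, N, 3)
        mahal = np.einsum('tni,nij,tnj->tn', diff, self.Sigma_inv, diff)
        w = np.exp(-0.5 * mahal)                             # (T, N) densities
        dt = t_max / (n_samples - 1)
        return (w @ self.c).sum(axis=0) * dt                 # (3,) RGB estimate
```

In an actual GPF pipeline, the field would be seeded from the first SPPM iteration's photons and the parameters fit against multi-view renders; the sketch only illustrates the data layout and the ray-wise evaluation that replaces per-view photon tracing.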