🤖 AI Summary
This work addresses a key challenge in high dynamic range (HDR) novel view synthesis: inadequate modeling of environmental illumination often leads to gradient loss and anomalous HDR values in over- or under-exposed regions. To overcome these limitations, the authors propose PhysHDR-GS, a framework that, for the first time, integrates a physically based lighting model into 3D Gaussian Splatting. The approach employs a dual-branch architecture, comprising an image-exposure (IE) branch and a Gaussian-illumination (GI) branch, to jointly model intrinsic reflectance and adjustable environmental lighting, thereby recovering appearance details across varying exposures and illumination conditions. A cross-branch HDR consistency loss and a lighting-guided gradient scaling strategy mitigate the gradient starvation and representation sparsity caused by exposure bias. Experiments on real-world and synthetic datasets demonstrate significant improvements in HDR reconstruction quality, including a 2.04 dB PSNR gain over HDR-GS, while maintaining real-time rendering performance at 76 FPS.
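To make the "explicit HDR supervision" idea concrete, here is a minimal sketch of what a cross-branch consistency term could look like. This is an illustrative assumption, not the paper's actual implementation: the function names (`mu_tonemap`, `hdr_consistency_loss`) and the choice of a μ-law tone mapper with an L1 penalty are hypothetical, and real code would operate on image tensors rather than Python lists.

```python
import math

def mu_tonemap(h, mu=5000.0):
    """mu-law tone mapping, a common choice for compressing HDR
    radiance into a displayable LDR range (assumed here; the
    paper's exact tone mapper may differ)."""
    return math.log(1.0 + mu * h) / math.log(1.0 + mu)

def hdr_consistency_loss(hdr_ie, hdr_gi):
    """Hypothetical cross-branch term: an L1 penalty tying the
    IE branch's HDR estimate to the GI branch's, so HDR values
    receive direct supervision instead of being constrained only
    through their tone-mapped (LDR) projections."""
    n = len(hdr_ie)
    return sum(abs(a - b) for a, b in zip(hdr_ie, hdr_gi)) / n
```

The point of such a term is that gradients flow into the HDR predictions themselves, not only through the saturating tone-mapping curve, which is what lets it correct abnormal HDR values in clipped regions.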
📝 Abstract
High dynamic range novel view synthesis (HDR-NVS) reconstructs scenes with rich dynamic-range detail by fusing multi-exposure low dynamic range (LDR) views, yet it struggles to capture ambient-illumination-dependent appearance. Supervising HDR content only implicitly, by constraining tone-mapped results, fails to correct abnormal HDR values and leaves Gaussians in under- or over-exposed regions with limited gradients. To address this, we introduce PhysHDR-GS, a physically inspired HDR-NVS framework that models scene appearance via intrinsic reflectance and adjustable ambient illumination. PhysHDR-GS employs complementary image-exposure (IE) and Gaussian-illumination (GI) branches to faithfully reproduce standard camera observations and to capture illumination-dependent appearance changes, respectively. During training, the proposed cross-branch HDR consistency loss provides explicit supervision for HDR content, while an illumination-guided gradient scaling strategy mitigates exposure-biased gradient starvation and reduces under-densified representations. Experiments on real-world and synthetic datasets demonstrate the superiority of our method in reconstructing HDR details (e.g., a PSNR gain of 2.04 dB over HDR-GS) while maintaining real-time rendering speed (up to 76 FPS). Code and models are available at https://huimin-zeng.github.io/PhysHDR-GS/.
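The "exposure-biased gradient starvation" the abstract refers to can be sketched as follows. This is a hedged illustration under assumed details: the hat-shaped well-exposedness weight and the inverse scaling rule (`hat_weight`, `gradient_scale`, and the `floor` clamp) are my assumptions for exposition, not the paper's published scheme, which may guide the scaling by its illumination model instead.

```python
def hat_weight(ldr, low=0.05, high=0.95):
    """Well-exposedness weight for an LDR pixel value in [0, 1]:
    near zero in under-/over-exposed regions, largest mid-range.
    Clipped pixels therefore contribute almost no photometric
    gradient, which starves the Gaussians behind them."""
    if ldr <= low or ldr >= high:
        return 0.0
    mid = 0.5 * (low + high)
    return 1.0 - abs(ldr - mid) / (mid - low)

def gradient_scale(ldr, floor=0.1):
    """Hypothetical illumination-guided scaling: amplify the
    densification gradient for poorly exposed pixels (up to
    1/floor) so the corresponding Gaussians still get split or
    cloned, reducing under-densified regions."""
    return 1.0 / max(hat_weight(ldr), floor)
```

Under this sketch, a mid-tone pixel keeps its gradient unchanged (scale 1.0), while a fully clipped pixel has its gradient amplified tenfold, counteracting the exposure bias in where densification fires.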