Physically Inspired Gaussian Splatting for HDR Novel View Synthesis

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses high dynamic range (HDR) novel view synthesis, where inadequate modeling of environmental illumination leads to gradient loss and anomalous HDR values in over- or under-exposed regions. To overcome these limitations, the authors propose PhysHDR-GS, a framework that, for the first time, integrates a physically based lighting model into 3D Gaussian Splatting. The approach employs a dual-branch architecture, comprising an image-exposure (IE) branch and a Gaussian-illumination (GI) branch, to jointly model intrinsic reflectance and adjustable environmental lighting, thereby recovering appearance details across varying exposures and illumination conditions. A cross-branch HDR consistency loss provides explicit supervision, while an illumination-guided gradient scaling strategy mitigates the gradient starvation and representation sparsity caused by exposure bias. Experiments show significant improvements in HDR reconstruction quality, including a 2.04 dB PSNR gain over HDR-GS on both real-world and synthetic datasets, while maintaining real-time rendering at 76 FPS.
📝 Abstract
High dynamic range novel view synthesis (HDR-NVS) reconstructs scenes with dynamic details by fusing multi-exposure low dynamic range (LDR) views, yet it struggles to capture ambient-illumination-dependent appearance. Implicitly supervising HDR content by constraining tone-mapped results fails to correct abnormal HDR values and yields limited gradients for Gaussians in under- or over-exposed regions. To this end, we introduce PhysHDR-GS, a physically inspired HDR-NVS framework that models scene appearance via intrinsic reflectance and adjustable ambient illumination. PhysHDR-GS employs complementary image-exposure (IE) and Gaussian-illumination (GI) branches to faithfully reproduce standard camera observations and to capture illumination-dependent appearance changes, respectively. During training, the proposed cross-branch HDR consistency loss provides explicit supervision for HDR content, while an illumination-guided gradient scaling strategy mitigates exposure-biased gradient starvation and reduces under-densified representations. Experimental results on real-world and synthetic datasets demonstrate superior reconstruction of HDR details (e.g., a PSNR gain of 2.04 dB over HDR-GS), while maintaining real-time rendering speed (up to 76 FPS). Code and models are available at https://huimin-zeng.github.io/PhysHDR-GS/.
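The abstract names an illumination-guided gradient scaling strategy but does not specify it. As a rough illustration of the general idea (the function, the exposure weighting, and the clipping rule below are all assumptions for exposition, not the paper's actual formulation), gradients of Gaussians projected into poorly exposed regions could be boosted by an inverse-exposure factor:

```python
import numpy as np

def illumination_guided_scale(grads, exposure_weight, eps=1e-6, max_scale=10.0):
    """Hypothetical sketch: boost per-Gaussian gradient magnitudes in
    under/over-exposed regions, where tone-mapped supervision saturates
    and gradients vanish.

    grads:           (N,) gradient magnitudes accumulated per Gaussian.
    exposure_weight: (N,) values in [0, 1]; near 1 for well-exposed
                     pixels, near 0 where the image saturates (this
                     weighting rule is an assumption, not the paper's).
    """
    # Inverse weighting compensates for vanishing gradients, clipped so
    # that well-exposed regions keep roughly their original gradients
    # and the boost stays bounded for stability.
    scale = np.clip(1.0 / (exposure_weight + eps), 1.0, max_scale)
    return grads * scale

# Toy example: a saturated region (weight 0.05) receives the capped
# 10x boost, while a well-exposed region (weight 0.90) changes little.
grads = np.array([0.01, 0.50])
weights = np.array([0.05, 0.90])
scaled = illumination_guided_scale(grads, weights)
```

Such a rescaling would also encourage densification in under-represented regions, which matches the abstract's claim of reducing "under-densified representations".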
Problem

Research questions and friction points this paper is trying to address.

HDR novel view synthesis
ambient illumination
gradient starvation
tone mapping
multi-exposure fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

HDR novel view synthesis
physically based rendering
Gaussian splatting
illumination modeling
gradient scaling
Authors

Huimin Zeng
Northeastern University
computer vision
Yue Bai
Northwestern University, Northeastern University
Multi-modal learning, Sparse network training, Mask learning
Hailing Wang
Department of Electrical and Computer Engineering, Northeastern University
Yun Fu
Department of Electrical and Computer Engineering, Northeastern University; Khoury College of Computer Science, Northeastern University