🤖 AI Summary
Existing inverse rendering approaches rely on high-dynamic-range (HDR) inputs, which prevents their use on the ubiquitous low-dynamic-range (LDR) photographs produced by everyday cameras. This paper introduces the first end-to-end framework that jointly reconstructs indoor scene geometry, spatially varying HDR illumination, physically based BRDF materials, and camera response functions directly from multi-view LDR images. Our method integrates differentiable rendering, physics-based illumination modeling, nonlinear camera response estimation, and multi-view geometric consistency constraints, eliminating the need for HDR acquisition entirely. Evaluated on both synthetic and real-world data, our approach reduces HDR illumination reconstruction error by 32% and improves material inversion PSNR by 5.8 dB, enabling high-fidelity relighting and seamless compositing of virtual objects into real scenes. By removing the HDR dependency, our framework makes inverse rendering substantially more practical under everyday imaging conditions.
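To see why LDR input complicates illumination recovery, it helps to keep the LDR image formation model in mind: scene radiance passes through a nonlinear camera response function (CRF) and is then clipped to the displayable range, so bright light sources saturate. The sketch below is a minimal illustration of this forward model, assuming a simple gamma curve as a stand-in CRF; it is not the paper's exact formulation, and the function and parameter names are hypothetical.

```python
import numpy as np

def apply_crf(hdr_radiance, exposure=1.0, gamma=2.2):
    """Map linear HDR scene radiance to an LDR pixel value in [0, 1].

    A fixed gamma curve stands in for the camera response function here;
    real CRFs vary by camera and are unknown, which is why they must be
    estimated jointly with lighting and materials. Illustrative sketch only.
    """
    exposed = hdr_radiance * exposure                                 # simulated sensor exposure
    compressed = np.power(np.clip(exposed, 0.0, None), 1.0 / gamma)   # nonlinear response curve
    return np.clip(compressed, 0.0, 1.0)                              # LDR clipping: highlights saturate

# Bright sources (radiance >> 1) all map to 1.0 in LDR, so their true HDR
# intensity cannot be read off the pixels and must be inferred.
radiance = np.array([0.05, 0.5, 2.0, 50.0])
print(apply_crf(radiance))  # approx. [0.256, 0.730, 1.0, 1.0]
```

Inverting this saturating, nonlinear mapping from multi-view LDR observations, jointly with geometry, materials, and the unknown response curve, is what allows HDR illumination to be recovered without HDR capture.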
📝 Abstract
Inverse rendering seeks to recover 3D geometry, surface materials, and lighting from captured images, enabling advanced applications such as novel-view synthesis, relighting, and virtual object insertion. However, most existing techniques rely on high dynamic range (HDR) images as input, limiting accessibility for general users. In response, we introduce IRIS, an inverse rendering framework that recovers physically based materials, spatially varying HDR lighting, and camera response functions from multi-view, low-dynamic-range (LDR) images. By eliminating the dependence on HDR input, we make inverse rendering technology more accessible. We evaluate our approach on real-world and synthetic scenes and compare it with state-of-the-art methods. Our results show that IRIS effectively recovers HDR lighting, accurate materials, and plausible camera response functions, supporting photorealistic relighting and object insertion.