🤖 AI Summary
This work addresses the limited generalization of existing inverse tone-mapping methods under real-world degradations, stylistic variations, and diverse camera pipelines by adapting a pretrained diffusion Transformer for SDR-to-HDR conversion. The method integrates luminance, spatial, frequency, and perceptual features through a Physically-Guided Adaptation module, a Perceptual Cross-Modulation layer, and an HDR Residual Coupler to achieve high-fidelity reconstruction. A large-scale SDR-HDR training dataset is introduced alongside a new evaluation benchmark. By combining low-rank residual attention injection, FiLM conditional modulation, timestep- and layer-adaptive fusion, and a rational-quadratic spline decoder, the model outperforms state-of-the-art methods across multiple benchmarks, particularly in luminance reconstruction and perceptual color fidelity, while adding minimal parameters.
📝 Abstract
The rapid adoption of HDR-capable devices has created a pressing need to convert 8-bit Standard Dynamic Range (SDR) content into perceptually and physically accurate 10-bit High Dynamic Range (HDR). Existing inverse tone-mapping (ITM) methods often rely on fixed tone-mapping operators that struggle to generalize to real-world degradations, stylistic variations, and camera pipelines, frequently producing clipped highlights, desaturated colors, or unstable tone reproduction. We introduce LumaFlux, the first physically and perceptually guided diffusion transformer (DiT) for SDR-to-HDR reconstruction, built by adapting a large pretrained DiT. LumaFlux introduces (1) a Physically-Guided Adaptation (PGA) module that injects luminance, spatial descriptors, and frequency cues into the attention layers through low-rank residuals; (2) a Perceptual Cross-Modulation (PCM) layer that stabilizes chroma and texture via FiLM conditioning on vision encoder features; and (3) an HDR Residual Coupler that fuses the physical and perceptual signals under a timestep- and layer-adaptive modulation schedule. Finally, a lightweight Rational-Quadratic Spline decoder reconstructs smooth, interpretable tone fields for highlight and exposure expansion, refining the output of the VAE decoder to produce the final HDR image. To enable robust HDR learning, we curate the first large-scale SDR-HDR training corpus. For fair and reproducible comparison, we further establish a new evaluation benchmark comprising HDR references and corresponding expert-graded SDR versions. Across benchmarks, LumaFlux outperforms state-of-the-art baselines, achieving superior luminance reconstruction and perceptual color fidelity with minimal additional parameters.
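The abstract names two generic adaptation mechanisms, FiLM conditioning and low-rank residual injection into attention. The sketch below illustrates both in isolation; it is not the paper's implementation, and all module names, shapes, and the rank value are illustrative assumptions.

```python
# Minimal sketch (NOT the LumaFlux code): the two generic mechanisms
# named in the abstract. Shapes, names, and rank are assumptions.
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift token features
    using parameters predicted from a conditioning vector (here, a
    stand-in for vision-encoder features)."""
    def __init__(self, cond_dim: int, feat_dim: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, feat_dim); cond: (batch, cond_dim)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        return x * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)


class LowRankResidual(nn.Module):
    """LoRA-style adapter: a frozen base projection plus a trainable
    rank-r residual, so adaptation adds few parameters."""
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)   # pretrained weights stay frozen
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)    # residual starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))


x = torch.randn(2, 16, 64)    # (batch, tokens, dim)
cond = torch.randn(2, 32)
y = FiLM(32, 64)(x, cond)
z = LowRankResidual(64)(x)
print(y.shape, z.shape)       # both torch.Size([2, 16, 64])
```

Zero-initializing the up-projection means the adapted model starts out exactly equal to the frozen pretrained one, a common choice for stable fine-tuning of large pretrained backbones.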