DiffHDR: Re-Exposing LDR Videos with Video Diffusion Models

📅 2026-04-07
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge of reconstructing high dynamic range (HDR) video from 8-bit low dynamic range (LDR) inputs, where highlight and shadow details are often lost due to saturation and quantization, hindering accurate radiance recovery. To tackle this, we introduce—for the first time—a video diffusion model for LDR-to-HDR reconstruction, formulating the task as generative radiance inpainting in a Log-Gamma color space latent representation. Our approach leverages spatiotemporal priors from a pretrained video diffusion model and supports controllable conversion guided by either text prompts or reference images. Furthermore, we devise a synthetic data generation strategy based on static HDRI maps to construct high-quality HDR video training pairs. Experiments demonstrate that our method significantly outperforms existing techniques in both radiometric fidelity and temporal consistency, producing photorealistic HDR videos capable of supporting substantial re-exposure adjustments.
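The inpainting formulation above can be made concrete with a small sketch: de-gamma the 8-bit LDR frame into linear radiance, log-encode that radiance into a bounded range for the network, and mark the saturated pixels the generative model must fill in. This is a minimal illustration only; the function names, the simple log curve, and the thresholds are assumptions, and the paper's exact Log-Gamma transform is not given in this excerpt.

```python
import numpy as np

# Minimal sketch (assumed, not the paper's exact transform): prepare an LDR frame
# for generative radiance inpainting by (1) de-gamma-ing to linear radiance,
# (2) log-encoding radiance into [0, 1], and (3) masking saturated pixels.

GAMMA = 2.2  # assumed display gamma of the 8-bit LDR input

def radiance_log_encode(hdr, max_radiance=100.0):
    """Compress linear radiance in [0, max_radiance] into [0, 1] with a log curve."""
    return np.log1p(np.clip(hdr, 0.0, max_radiance)) / np.log1p(max_radiance)

def radiance_log_decode(enc, max_radiance=100.0):
    """Invert radiance_log_encode back to linear radiance."""
    return np.expm1(enc * np.log1p(max_radiance))

def saturation_mask(ldr_u8, low=5, high=250):
    """Pixels whose radiance was destroyed by under-/overexposure in the 8-bit LDR."""
    return (ldr_u8 <= low) | (ldr_u8 >= high)

# Toy usage on a random stand-in frame.
ldr = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
linear = (ldr.astype(np.float32) / 255.0) ** GAMMA   # de-gamma to linear radiance
mask = saturation_mask(ldr).any(axis=-1)             # regions to inpaint generatively
encoded = radiance_log_encode(linear)                # bounded representation for the model
```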
📝 Abstract
Most digital videos are stored in 8-bit low dynamic range (LDR) formats, where much of the original high dynamic range (HDR) scene radiance is lost due to saturation and quantization. This loss of highlight and shadow detail precludes mapping accurate luminance to HDR displays and limits meaningful re-exposure in post-production workflows. Although techniques have been proposed to convert LDR images to HDR through dynamic range expansion, they struggle to restore realistic detail in the over- and underexposed regions. To address this, we present DiffHDR, a framework that formulates LDR-to-HDR conversion as a generative radiance inpainting task within the latent space of a video diffusion model. By operating in Log-Gamma color space, DiffHDR leverages spatio-temporal generative priors from a pretrained video diffusion model to synthesize plausible HDR radiance in over- and underexposed regions while recovering the continuous scene radiance of the quantized pixels. Our framework further enables controllable LDR-to-HDR video conversion guided by text prompts or reference images. To address the scarcity of paired HDR video data, we develop a pipeline that synthesizes high-quality HDR video training data from static HDRI maps. Extensive experiments demonstrate that DiffHDR significantly outperforms state-of-the-art approaches in radiance fidelity and temporal stability, producing realistic HDR videos with considerable latitude for re-exposure.
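The HDRI-based training-data idea described in the abstract can likewise be sketched: pan a virtual window across a static HDRI map to obtain a ground-truth HDR clip, then expose, clip, gamma-encode, and quantize it to produce the paired 8-bit LDR input. The crop/pan scheme, exposure value, and names below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

# Minimal sketch of one way to synthesize HDR/LDR video training pairs from a
# static HDRI environment map. Parameters and helper names are hypothetical.

def pan_crops(hdri, num_frames=16, crop=(256, 256), step=8):
    """Slide a fixed-size window horizontally across the HDRI to mimic camera motion."""
    h, w = crop
    frames = []
    for t in range(num_frames):
        x0 = (t * step) % (hdri.shape[1] - w)
        frames.append(hdri[:h, x0:x0 + w, :])
    return np.stack(frames)                              # (T, H, W, 3) linear radiance

def to_ldr(hdr_frames, exposure=1.0, gamma=2.2):
    """Expose, clip, gamma-encode, and quantize linear radiance to 8-bit LDR."""
    exposed = hdr_frames * exposure
    clipped = np.clip(exposed, 0.0, 1.0)                 # saturation: highlights are lost
    encoded = clipped ** (1.0 / gamma)
    return np.round(encoded * 255).astype(np.uint8)      # quantization: shadow detail is lost

hdri = np.random.rand(512, 1024, 3).astype(np.float32) * 50.0  # stand-in HDRI map
hdr_video = pan_crops(hdri)                              # ground-truth HDR clip
ldr_video = to_ldr(hdr_video, exposure=0.02)             # paired LDR input for training
```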
Problem

Research questions and friction points this paper is trying to address.

LDR-to-HDR conversion
dynamic range expansion
radiance recovery
overexposed regions
underexposed regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

video diffusion model
LDR-to-HDR conversion
radiance inpainting
Log-Gamma color space
controllable generation