UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion

📅 2025-01-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing exposure fusion methods struggle in ultra-high dynamic range (UHDR) scenes, where exposure differences can reach 9 stops, suffering from registration sensitivity, illumination inconsistency, and tone-mapping distortion. To address these issues, this work reformulates exposure fusion as a guided image inpainting task: the underexposed image serves as a soft guidance to reconstruct highlight details in overexposed regions, while a generative prior enables natural tone mapping. The approach leverages a deep generative model that jointly performs multi-scale feature alignment and adaptive exposure-weight learning. Evaluated on the newly introduced UltraFusion Dataset, which features 9-stop exposure brackets, the method substantially outperforms HDR-Transformer, producing UHDR images with rich detail, no artifacts, and a visually natural appearance. To foster reproducibility and further research, the authors publicly release their source code, the UltraFusion Dataset, and an interactive online demo.

📝 Abstract
Capturing high dynamic range (HDR) scenes is one of the most important issues in camera design. The majority of cameras use the exposure fusion technique, which fuses images captured at different exposure levels, to increase dynamic range. However, this approach can only handle images with a limited exposure difference, normally 3-4 stops. When applied to very high dynamic range scenes where a large exposure difference is required, it often fails due to incorrect alignment or inconsistent lighting between inputs, or tone mapping artifacts. In this work, we propose UltraFusion, the first exposure fusion technique that can merge inputs with up to 9 stops of exposure difference. The key idea is to model exposure fusion as a guided inpainting problem, where the under-exposed image serves as guidance to fill in the missing highlight information in over-exposed regions. By using the under-exposed image as a soft guidance rather than a hard constraint, our model is robust to potential alignment issues and lighting variations. Moreover, by leveraging the image prior of a generative model, our model also produces natural tone mapping, even for very high dynamic range scenes. Our approach outperforms HDR-Transformer on the latest HDR benchmarks. To test its performance in ultra high dynamic range scenes, we further capture a new real-world exposure fusion benchmark, the UltraFusion Dataset, with exposure differences up to 9 stops; experiments show that our model generates beautiful, high-quality fusion results under various scenarios. An online demo is provided at https://openimaginglab.github.io/UltraFusion/.
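The classic exposure fusion baseline the abstract refers to (fusing bracketed exposures by weighting each pixel by how well-exposed it is) can be sketched as follows. This is a simplified illustration of the traditional technique, not the paper's method: it keeps only the well-exposedness term of Mertens-style fusion, omits the contrast and saturation terms and the Laplacian-pyramid blending, and assumes perfectly aligned inputs, which is exactly the assumption that breaks down at large exposure differences.

```python
import numpy as np

def naive_exposure_fusion(images, sigma=0.2):
    """Fuse aligned LDR exposures via per-pixel well-exposedness weights.

    Each pixel's weight peaks where its intensity is near mid-gray (0.5),
    so well-exposed regions of each frame dominate the blend.
    images: list of float arrays in [0, 1] with identical shapes.
    """
    stack = np.stack(images, axis=0)                       # (N, H, W[, C])
    # Well-exposedness: Gaussian bump centered at mid-gray.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    if stack.ndim == 4:                                    # color: pool over channels
        weights = weights.mean(axis=-1, keepdims=True)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12  # normalize across exposures
    return (weights * stack).sum(axis=0)

# Two synthetic "exposures" of the same gradient scene:
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1)
under = np.clip(scene * 0.25, 0.0, 1.0)  # dark frame keeps highlight detail
over = np.clip(scene * 4.0, 0.0, 1.0)    # bright frame keeps shadow detail
fused = naive_exposure_fusion([under, over])
```

In the highlights (where the bright frame clips to 1.0), the dark frame's values sit closer to mid-gray and therefore receive most of the weight; this per-pixel hand-off is what raises dynamic range, and why misalignment or lighting changes between frames produce the ghosting and seam artifacts the paper targets.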
Problem

Research questions and friction points this paper is trying to address.

High Dynamic Range Imaging
Exposure Fusion
Image Artefacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

UltraFusion
HDR Imaging
Multi-exposure Photo Merge