🤖 AI Summary
HDR fusion models suffer from limited generalization due to the scarcity and high cost of acquiring real-world dynamic HDR data. To address this, we introduce S2R-HDR, the first large-scale, high-fidelity synthetic HDR fusion dataset, comprising 24,000 samples generated with Unreal Engine 5 to simulate photorealistic HDR scenes with dynamic objects, complex motion, and physically accurate lighting. We further design an efficient rendering pipeline with precise multi-exposure calibration. To bridge the synthetic-to-real domain gap, we propose S2R-Adapter, a domain adaptation module that mitigates domain shift and improves generalization. Our method achieves state-of-the-art reconstruction performance across multiple real-world HDR benchmarks. The dataset, source code, and trained models are publicly released.
📝 Abstract
The generalization of learning-based high dynamic range (HDR) fusion is often limited by the availability of training data, as collecting large-scale HDR images of dynamic scenes is both costly and technically challenging. To address these challenges, we propose S2R-HDR, the first large-scale, high-quality synthetic dataset for HDR fusion, containing 24,000 HDR samples. Using Unreal Engine 5, we design a diverse set of realistic HDR scenes that encompass various dynamic elements, motion types, dynamic range conditions, and lighting. Additionally, we develop an efficient rendering pipeline to generate realistic HDR images. To further mitigate the domain gap between synthetic and real-world data, we introduce S2R-Adapter, a domain adaptation module designed to bridge this gap and enhance the generalization ability of models. Experimental results on real-world datasets demonstrate that our approach achieves state-of-the-art HDR reconstruction performance. Dataset and code will be available at https://openimaginglab.github.io/S2R-HDR.