🤖 AI Summary
To address the limitations of paired-data dependency and degraded performance in unpaired LDR-to-HDR image translation, this paper proposes a semantic-aware self-supervised cycle-consistent framework. Methodologically, it introduces (1) a novel semantic consistency encoder with an associated loss to explicitly enforce cross-domain semantic preservation; (2) a gradient-aware generator jointly optimized via adversarial loss, cycle-consistency loss, perceptual loss, and gradient-domain regularization—effectively suppressing blur and color-shift artifacts; and (3) the first systematic realization of high-fidelity HDR reconstruction under fully unpaired settings. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with significant improvements in detail fidelity, global contrast, and visual naturalness.
📝 Abstract
Low Dynamic Range (LDR) to High Dynamic Range (HDR) image translation is an important computer vision problem. There is a significant amount of research utilizing both conventional non-learning methods and modern data-driven approaches, focusing on using both single-exposed and multi-exposed LDR for HDR image reconstruction. However, most current state-of-the-art methods require high-quality paired {LDR,HDR} datasets for model training. In addition, there is limited literature on using unpaired datasets for this task where the model learns a mapping between domains, i.e., LDR to HDR. To address limitations of current methods, such as the paired data constraint , as well as unwanted blurring and visual artifacts in the reconstructed HDR, we propose a method that uses a modified cycle-consistent adversarial architecture and utilizes unpaired {LDR,HDR} datasets for training. The method introduces novel generators to address visual artifact removal and an encoder and loss to address semantic consistency, another under-explored topic. The method achieves state-of-the-art results across several benchmark datasets and reconstructs high-quality HDR images.