🤖 AI Summary
To address flickering and ghosting artifacts in alternating-exposure HDR video reconstruction caused by exposure fluctuations in reference frames, this paper proposes a dual-camera cooperative capture framework and an Exposure-Adaptive Fusion Network (EAFNet). The primary camera captures a stable single-exposure reference sequence, while the auxiliary camera synchronously acquires multi-exposure auxiliary frames. EAFNet incorporates a pre-alignment subnetwork and a reference-dominant asymmetric cross-feature fusion module to achieve precise feature alignment and dynamic weight assignment across exposures. Additionally, a discrete wavelet transform (DWT)-based multi-scale reconstruction scheme enhances fine-detail fidelity. Extensive evaluations on multiple benchmark datasets demonstrate that the method significantly suppresses flickering and motion-related artifacts, achieving state-of-the-art HDR video reconstruction quality. The source code and dataset are publicly available.
📝 Abstract
In HDR video reconstruction, exposure fluctuations in the reference images produced by alternating-exposure methods often result in flickering. To address this issue, we propose a dual-camera system (DCS) for HDR video acquisition, in which one camera captures a consistent reference sequence while the other captures non-reference sequences that supply complementary information. To tackle the challenges posed by video data, we introduce an exposure-adaptive fusion network (EAFNet) to achieve more robust results. EAFNet introduces a pre-alignment subnetwork to explore the influence of exposure, selectively emphasizing valuable features across different exposure levels. The enhanced features are then fused by an asymmetric cross-feature fusion subnetwork, which exploits reference-dominated attention maps to improve image fusion by aligning cross-scale features and performing cross-feature fusion. Finally, the reconstruction subnetwork adopts a DWT-based multiscale architecture to reduce ghosting artifacts and refine features at different resolutions. Extensive experimental evaluations demonstrate that the proposed method achieves state-of-the-art performance on different datasets, validating the strong potential of the DCS in HDR video reconstruction. The code and data captured by the DCS will be available at https://github.com/zqqqyu/DCS.
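The "reference-dominated attention map" idea in the fusion subnetwork can be illustrated with a simplified sketch: queries come from the stable reference-frame features, while keys and values come from the auxiliary multi-exposure features, so the reference stream dominates the fused output. This is an illustrative numpy toy (the function name `reference_dominated_fusion`, the flattened `(N, C)` feature layout, and the residual-fusion step are assumptions for exposition, not the paper's actual implementation):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reference_dominated_fusion(ref_feat, aux_feat):
    """Fuse auxiliary features into the reference stream (illustrative).

    ref_feat: (N, C) reference-frame features, used as queries
    aux_feat: (M, C) auxiliary multi-exposure features, used as keys/values

    Because the attention map is computed from the reference features,
    poorly exposed auxiliary content receives low weight and the stable
    reference sequence dominates the fused result.
    """
    c = ref_feat.shape[1]
    # (N, M) attention map: how much each auxiliary feature contributes.
    attn = softmax(ref_feat @ aux_feat.T / np.sqrt(c), axis=-1)
    # Residual fusion: reference features plus attention-weighted auxiliary content.
    fused = ref_feat + attn @ aux_feat
    return fused, attn
```

In the actual network these operations would run on multi-scale convolutional feature maps rather than flat vectors, but the asymmetry (reference as query, auxiliary as key/value) is the key design point.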
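The DWT-based multiscale reconstruction relies on the fact that a discrete wavelet transform splits an image into a low-frequency subband plus directional high-frequency subbands, and is exactly invertible, so features can be refined per subband and recombined without loss. A minimal single-level 2D Haar DWT sketch (the function names and the 2×2 block formulation are illustrative assumptions; the paper does not specify its wavelet basis here):

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2D Haar DWT on an even-sized array.
    # Each 2x2 block [a b; c d] maps to one coefficient per subband.
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # low-freq approximation (half resolution)
    lh = (a + b - c - d) / 2.0  # horizontal detail
    hl = (a - b + c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Inverse single-level 2D Haar DWT: perfect reconstruction.
    a = (ll + lh + hl + hh) / 2.0
    b = (ll + lh - hl - hh) / 2.0
    c = (ll - lh + hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    out[0::2, 0::2] = a
    out[0::2, 1::2] = b
    out[1::2, 0::2] = c
    out[1::2, 1::2] = d
    return out
```

Applying the forward transform recursively to the LL subband yields the multiscale pyramid; in an EAFNet-style reconstruction subnetwork, ghosting would be suppressed by processing the subbands at each resolution before inverting the transform.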