🤖 AI Summary
To address insufficient glare suppression in multi-exposure image fusion, this paper proposes an unsupervised, controllable fusion method grounded in Retinex theory. First, it explicitly models overexposed glare within the Retinex decomposition framework, a formulation not adopted in prior work. Second, it introduces a bidirectional consistency loss to constrain the shared reflectance component, ensuring physically plausible decoupling of illumination and reflectance. Third, it designs an adjustable global exposure fusion criterion, overcoming the limitations of conventional fixed-level fusion strategies. By integrating multi-scale feature modeling with unsupervised learning, the method effectively suppresses glare across challenging scenarios, including underexposed/overexposed input pairs, exposure adjustment, and extreme homogeneous exposures, while significantly enhancing contrast and detail fidelity in the fused images. The approach demonstrates strong generalization without requiring ground-truth supervision.
📝 Abstract
Multi-exposure image fusion consolidates multiple low dynamic range images of the same scene into a single high dynamic range image. Retinex theory, which separates image illumination from scene reflectance, is naturally adopted to ensure consistent scene representation and effective information fusion across varied exposure levels. However, the conventional pixel-wise multiplication of illumination and reflectance inadequately models the glare effect induced by overexposure. To better adapt this theory to multi-exposure image fusion, we introduce an unsupervised and controllable method termed **Retinex-MEF**. Specifically, our method decomposes multi-exposure images into separate illumination components and a shared reflectance component, and explicitly models the glare induced by overexposure. By employing a bidirectional loss constraint to learn the common reflectance component, our approach effectively mitigates the glare effect. Furthermore, we establish a controllable exposure fusion criterion, enabling global exposure adjustments while preserving contrast, thus overcoming the constraints of fixed-level fusion. Experiments across multiple datasets, covering underexposure-overexposure fusion, exposure control fusion, and homogeneous extreme exposure fusion, demonstrate the effective decomposition and flexible fusion capability of our model.
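The Retinex model the abstract builds on can be sketched numerically. In this minimal NumPy illustration, each exposure is the pixel-wise product of its own illumination map with a reflectance shared across exposures, and overexposure glare is stood in for by simple sensor clipping; the paper's actual glare formulation and learned decomposition are not detailed in the abstract, so all names and the clipping model here are illustrative assumptions.

```python
import numpy as np

# Illustrative Retinex imaging model: each exposure I_k = L_k * R, where
# R is the reflectance shared across exposures and L_k is per-exposure
# illumination. Glare is approximated here by clipping at the sensor's
# saturation level (an assumption; the paper's glare term may differ).
R = np.linspace(0.2, 0.9, 16).reshape(4, 4)   # shared scene reflectance
L_under = np.full_like(R, 0.3)                # dim illumination (underexposed)
L_over = np.full_like(R, 2.5)                 # bright illumination (overexposed)

I_under = L_under * R                         # classic Retinex: I = L * R
I_over = np.clip(L_over * R, 0.0, 1.0)        # clipping stands in for glare

# With known illumination, the underexposed image yields R exactly, but
# clipped (glared) pixels in the overexposed image lose reflectance
# information -- the inconsistency the shared-reflectance constraint targets.
R_from_under = I_under / L_under
R_from_over = I_over / L_over

print(np.allclose(R_from_under, R))           # True
print(np.allclose(R_from_over, R))            # False: glared pixels are capped
```

The mismatch between the two recovered reflectance maps motivates a bidirectional consistency constraint: a single shared reflectance must explain all exposures, so glare-corrupted pixels cannot be trusted equally in both directions.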