🤖 AI Summary
This work addresses the challenge of high dynamic range (HDR) reconstruction from modulo imaging, where natural image edges are often indistinguishable from artificial wrap-around discontinuities. To resolve this ambiguity, the authors propose a learning-based HDR reconstruction framework that incorporates scale-equivariant regularization to enforce consistency across exposures. The method also introduces a feature-enhanced input design that fuses the raw modulo image, wrapped finite differences, and a closed-form initialization, improving the network's ability to discriminate genuine image structure from wrapping artifacts. Extensive experiments show that the approach achieves state-of-the-art performance on both perceptual and linear HDR quality metrics.
📝 Abstract
Modulo imaging enables high dynamic range (HDR) acquisition by cyclically wrapping saturated intensities, but accurate reconstruction remains challenging due to ambiguities between natural image edges and artificial wrap discontinuities. This work proposes a learning-based HDR restoration framework that incorporates two key strategies: (i) a scale-equivariant regularization that enforces consistency under exposure variations, and (ii) a feature-lifting input design combining the raw modulo image, wrapped finite differences, and a closed-form initialization. Together, these components enhance the network's ability to distinguish true structure from wrapping artifacts, yielding state-of-the-art performance across perceptual and linear HDR quality metrics.
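The wrapping model and the wrapped-finite-difference cue can be illustrated in 1D. The sketch below is a minimal NumPy example under assumed conventions (the threshold `lam`, and the functions `wrap`, `wrapped_diff`, and `unwrap_1d` are hypothetical names for illustration, not the paper's code): re-wrapping differences into (-lam/2, lam/2] suppresses wrap discontinuities, and integrating them gives an Itoh-style closed-form initialization of the kind the abstract mentions.

```python
import numpy as np

def wrap(x, lam):
    # Modulo camera model: intensities fold back cyclically at threshold lam.
    return np.mod(x, lam)

def wrapped_diff(y, lam):
    # Wrapped finite differences: re-wrap differences of the modulo image
    # into (-lam/2, lam/2]. Small natural gradients pass through, while
    # wrap jumps of magnitude ~lam are cancelled.
    d = np.diff(y)
    return np.mod(d + lam / 2.0, lam) - lam / 2.0

def unwrap_1d(y, lam):
    # Closed-form initialization: integrate the wrapped differences to
    # recover the signal up to its first sample, assuming true gradients
    # stay below lam/2 in magnitude.
    d = wrapped_diff(y, lam)
    return np.concatenate([[y[0]], y[0] + np.cumsum(d)])

# Smooth HDR ramp exceeding the sensor range lam = 1.0
x = np.linspace(0.0, 3.0, 50)
y = wrap(x, 1.0)
x_hat = unwrap_1d(y, 1.0)
print(np.allclose(x_hat, x))  # True: gradients here stay below lam/2
```

When true gradients exceed lam/2 (sharp natural edges), this inversion fails, which is exactly the edge-versus-wrap ambiguity the learned network is meant to resolve.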