🤖 AI Summary
Existing image-to-image translation (I2IT) methods for contrast enhancement often suffer from loss of fine-grained textures and low-level details. This paper proposes a Laplacian-pyramid-driven, multi-scale perceptual loss framework coupled with multi-resolution discriminators to achieve high-fidelity reconstruction across diverse illumination conditions. Our key contributions are: (1) a hierarchical reconstruction loss built upon the Laplacian pyramid, explicitly enforcing detail fidelity at multiple scales; (2) a set of multi-resolution discriminators jointly optimized to preserve both global structural coherence and local textural realism; and (3) a unified loss function that jointly optimizes pixel-level accuracy and high-level perceptual quality. Evaluated on the SICE dataset, our method achieves state-of-the-art performance, significantly improving shadow texture preservation and highlight detail recovery while demonstrating superior generalization over existing approaches.
📝 Abstract
Contrast enhancement, a key aspect of image-to-image translation (I2IT), improves visual quality by adjusting intensity differences between pixels. However, many existing methods struggle to preserve fine-grained details, often losing low-level features. This paper introduces LapLoss, a novel approach to I2IT contrast enhancement built on Laplacian-pyramid-centric networks. The proposed approach employs multiple discriminators, each operating at a different resolution, to capture high-level features while preserving low-level details and textures under mixed lighting conditions. The loss is computed at multiple scales, balancing reconstruction accuracy and perceptual quality to improve overall image generation. The distinct blend of loss terms at each level of the pyramid, combined with the pyramid architecture itself, enables LapLoss to surpass contemporary contrast enhancement techniques. The framework achieves state-of-the-art results, performing consistently across the diverse lighting conditions of the SICE dataset.
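The core idea of a Laplacian-pyramid-based multi-scale loss can be sketched as follows. This is a minimal, illustrative 1-D version, not the paper's implementation: the actual method operates on 2-D images with learned networks and discriminators at each level, and the level weights here are an assumption of this sketch.

```python
# Illustrative 1-D Laplacian-pyramid multi-scale reconstruction loss.
# All function names below are hypothetical, for exposition only.

def downsample(x):
    # Halve resolution by averaging adjacent pairs.
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x):
    # Double resolution by repeating each sample.
    out = []
    for v in x:
        out += [v, v]
    return out

def laplacian_pyramid(x, levels):
    # Each level stores the high-frequency detail lost by down/up-sampling;
    # the final entry is the remaining low-frequency residual.
    pyr, cur = [], x
    for _ in range(levels):
        low = downsample(cur)
        up = upsample(low)
        pyr.append([a - b for a, b in zip(cur, up)])
        cur = low
    pyr.append(cur)
    return pyr

def lap_loss(pred, target, levels=2):
    # Mean L1 distance between corresponding pyramid levels, with a
    # heuristic per-level weight (an assumption, not the paper's choice).
    p_pyr = laplacian_pyramid(pred, levels)
    t_pyr = laplacian_pyramid(target, levels)
    total = 0.0
    for k, (p, t) in enumerate(zip(p_pyr, t_pyr)):
        weight = 2 ** k
        total += weight * sum(abs(a - b) for a, b in zip(p, t)) / len(p)
    return total

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(lap_loss(signal, signal))  # identical signals -> 0.0
```

Because each pyramid level isolates a different frequency band, penalizing errors per level forces the generator to match both coarse structure and fine texture, which is the intuition behind the paper's multi-scale formulation.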