🤖 AI Summary
Existing low-light image enhancement (LLIE) methods rely on over-parameterized deep models that exhibit two types of parameter redundancy: *parameter harmfulness*, where certain parameters actively degrade performance, and *parameter uselessness*, where others contribute negligibly to representation learning. This work systematically identifies and characterizes both redundancies in LLIE models. To mitigate parameter harmfulness, the authors propose Attention Dynamic Reallocation (ADR), which dynamically reallocates appropriate attention based on the original attention. To mitigate parameter uselessness, they introduce Parameter Orthogonal Generation (POG), which learns orthogonal basis embeddings of parameters and prevents degradation to static parameters. Experiments validate the effectiveness of both techniques, and the code will be released publicly.
📝 Abstract
Low-light image enhancement (LLIE) is a fundamental task in computational photography, aiming to improve illumination, reduce noise, and enhance the quality of images captured in low light. While recent advances primarily focus on customizing complex neural network models, we observe significant redundancy in these models, which limits further performance improvement. In this paper, we investigate and rethink model redundancy in LLIE, identifying two issues: parameter harmfulness and parameter uselessness. Inspired by this rethinking, we propose two techniques that mitigate model redundancy while improving LLIE performance: Attention Dynamic Reallocation (ADR) and Parameter Orthogonal Generation (POG). ADR dynamically reallocates appropriate attention based on the original attention, thereby mitigating parameter harmfulness. POG learns orthogonal basis embeddings of parameters and prevents degradation to static parameters, thereby mitigating parameter uselessness. Experiments validate the effectiveness of our techniques. We will release the code to the public.
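The abstract only names the two mechanisms, so the following is a minimal illustrative sketch, not the paper's implementation. It assumes POG keeps a set of mutually orthogonal basis embeddings (here obtained via a QR decomposition) and generates input-dependent parameters as a learned combination of those bases, so the parameters cannot collapse to a static vector; ADR is stood in for by a simple temperature-scaled softmax that redistributes the original attention weights. All names (`generate_params`, `reallocate_attention`, the projection `proj`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 16  # hypothetical: K basis embeddings, each a D-dim parameter vector

# Orthogonal basis embeddings via QR (one illustrative construction;
# the paper's actual scheme may differ).
Q, _ = np.linalg.qr(rng.standard_normal((D, K)))
basis = Q.T  # (K, D), rows are mutually orthonormal

# Stand-in for a small learned projection that maps input features to coefficients.
proj = rng.standard_normal((8, K))

def generate_params(x_feat):
    """POG-style sketch: parameters are an input-conditioned combination of
    orthogonal bases, so they vary per input instead of staying static."""
    coeffs = np.tanh(x_feat @ proj)  # (K,) input-dependent coefficients
    return coeffs @ basis            # (D,) dynamically generated parameters

def reallocate_attention(attn_logits, temperature=0.5):
    """ADR-style sketch: redistribute the original attention by temperature-scaled
    softmax, reducing weight on harmful (misplaced) responses."""
    a = attn_logits / temperature
    a = a - a.max()  # numerical stability
    e = np.exp(a)
    return e / e.sum()

x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
w1, w2 = generate_params(x1), generate_params(x2)
attn = reallocate_attention(rng.standard_normal(5))
```

Keeping the bases orthonormal means each basis contributes an independent direction in parameter space, which is one plausible reading of how POG avoids redundant, near-duplicate parameters.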