🤖 AI Summary
This work addresses the longstanding challenge in low-light image enhancement of simultaneously achieving a lightweight model and high visual quality. We propose a single-layer convolutional neural network (CNN) architecture that fundamentally rethinks design efficiency. Our method introduces: (1) the first automatic reparameterization-driven hierarchical neural architecture search (NAS), which decouples the optimization of model expressiveness from that of structural efficiency; (2) a performance breakthrough in single-layer networks enabled by learnable reparameterization; and (3) seamless deployment across heterogeneous hardware (CPU, GPU, NPU, and DSP) without architectural modification. Evaluated on multiple standard benchmarks, our approach achieves superior visual quality over state-of-the-art (SOTA) methods while delivering significantly faster inference than the currently fastest solutions. To our knowledge, this is the first work to realize real-time, high-performance low-light enhancement with a single-layer CNN, establishing a new Pareto-optimal frontier between accuracy and efficiency.
📝 Abstract
Deep learning-based low-light image enhancers have made significant progress in recent years, with a trend toward achieving satisfactory visual quality while gradually reducing parameter counts and improving computational efficiency. In this work, we aim to probe the limits of image enhancers in both visual quality and computational efficiency, striving for better performance and faster processing at once. Concretely, by rethinking the task demands, we build an explicit connection: visual quality and computational efficiency correspond to model learning and structure design, respectively. Around this connection, we enlarge the parameter space by introducing re-parameterization for ample model learning of a pre-defined minimalist network (e.g., just one layer), avoiding convergence to a poor local solution. To strengthen the structural representation, we define a hierarchical search scheme for discovering a task-oriented re-parameterized structure, which also provides powerful support for efficiency. Ultimately, this achieves efficient low-light image enhancement using only a single convolutional layer while maintaining excellent visual quality. Experimental results demonstrate our clear superiority in both quality and efficiency over recently proposed methods. In particular, our running time on various platforms (e.g., CPU, GPU, NPU, DSP) is consistently shorter than that of the existing fastest scheme. The source code will be released at https://github.com/vis-opt-group/AR-LLIE.
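The core trick behind training a multi-branch network but deploying a single convolutional layer is structural re-parameterization: because convolution is linear, parallel branches (e.g., a 3×3 conv, a 1×1 conv, and an identity shortcut) can be folded into one equivalent kernel after training. The sketch below illustrates this general idea on a single-channel, stride-1 case with random weights; it is a hypothetical toy example for intuition only, not the paper's searched structure.

```python
import numpy as np

def conv2d(x, k):
    # Naive "same"-padded 2D sliding-window correlation (the convolution
    # convention used by deep-learning frameworks), single channel, stride 1.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)

# Training-time branches (hypothetical weights standing in for learned ones).
k3 = rng.normal(size=(3, 3))                        # 3x3 conv branch
k1 = rng.normal(size=(1, 1))                        # 1x1 conv branch
k_id = np.zeros((3, 3)); k_id[1, 1] = 1.0           # identity shortcut as a 3x3 kernel

# Re-parameterization: zero-pad the 1x1 kernel to 3x3, then sum all branches
# into one 3x3 kernel for single-layer inference.
k_merged = k3 + np.pad(k1, 1) + k_id

x = rng.normal(size=(8, 8))
y_branches = conv2d(x, k3) + conv2d(x, k1) + x      # multi-branch (training-time) output
y_merged = conv2d(x, k_merged)                      # single merged conv (inference-time)
assert np.allclose(y_branches, y_merged)
```

Since the merged kernel is just a plain convolution, the deployed model runs unmodified on any backend with a conv primitive, which is what enables the paper's claim of portability across CPU, GPU, NPU, and DSP.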