🤖 AI Summary
To address the degradation of in-vehicle camera image quality under low-light nighttime conditions and the prohibitive computational overhead of existing enhancement methods—hindering embedded deployment—this paper proposes UltraFast-LieNET, an ultra-lightweight real-time network. Its core contributions are: (i) a dynamic shift convolution (DSConv) with only 12 parameters; (ii) a multi-scale shift residual block (MSRB) forming a minimal, tunable architecture of just 36 parameters; and (iii) a multi-level gradient-aware loss function to improve training stability. Evaluated on the LOLI-Street dataset, UltraFast-LieNET achieves a PSNR of 26.51 dB—4.6 dB higher than the state-of-the-art—while requiring only 180 total parameters. Comprehensive validation across four benchmark datasets confirms its optimal trade-off between real-time inference capability and enhancement quality. This work significantly advances the practical deployment of low-light image enhancement in resource-constrained automotive vision systems.
📝 Abstract
In low-light environments such as nighttime driving, severe image degradation undermines the reliability of in-vehicle cameras and, with it, driving safety. Because existing enhancement algorithms are typically too computationally intensive for vehicular deployment, we propose UltraFast-LieNET, a lightweight multi-scale shifted convolutional network for real-time low-light image enhancement. We introduce a Dynamic Shifted Convolution (DSConv) kernel with only 12 learnable parameters for efficient feature extraction. Integrating DSConv branches with varying shift distances yields a Multi-scale Shifted Residual Block (MSRB) that significantly expands the receptive field. To mitigate the training instability typical of lightweight networks, we incorporate a residual structure and a novel multi-level gradient-aware loss function. UltraFast-LieNET supports flexible parameter configuration, with a minimum size of only 36 parameters. On the LOLI-Street dataset it achieves a PSNR of 26.51 dB, outperforming state-of-the-art methods by 4.6 dB while using only 180 parameters. Experiments on four benchmark datasets validate its superior balance of real-time performance and enhancement quality under limited resources. Code is available at https://github.com/YuhanChen2024/UltraFast-LiNET
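To give intuition for why a shift-based kernel can be so parameter-cheap, the sketch below illustrates the general shift-convolution idea: instead of learning a dense k×k spatial kernel, each branch spatially shifts the feature map and scales it by a single learnable weight, so the parameter count equals the number of branches. This is a minimal NumPy illustration of the generic technique, not the authors' DSConv; the function name `shift_conv`, the branch count, and the shift offsets are illustrative assumptions.

```python
import numpy as np

def shift_conv(x, weights, shifts):
    """Toy shift convolution on a single-channel 2-D feature map.

    x       : 2-D array (H, W)
    weights : one learnable scalar per branch
    shifts  : list of (dy, dx) offsets, one per branch

    Each branch shifts the map by (dy, dx) and scales it by its weight;
    the branch outputs are summed. Parameter count = len(weights),
    independent of any spatial kernel size.
    """
    out = np.zeros_like(x)
    for w, (dy, dx) in zip(weights, shifts):
        # np.roll gives a circular shift; a real network would pad instead
        out += w * np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out

# Four shift branches -> only four learnable parameters in this toy setup
shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
weights = np.array([0.25, 0.25, 0.25, 0.25])

x = np.arange(16, dtype=float).reshape(4, 4)
y = shift_conv(x, weights, shifts)
# Interior pixels become the average of their four neighbours,
# mimicking a cross-shaped receptive field with far fewer parameters.
```

Larger shift distances in different branches enlarge the receptive field without adding spatial kernel weights, which is the mechanism the MSRB exploits at multiple scales.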