Towards Lightest Low-Light Image Enhancement Architecture for Mobile Devices

📅 2025-07-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Mobile low-light image enhancement must balance model lightweighting against real-time inference, while existing methods rely heavily on large-scale annotated data. Method: This paper proposes LiteIE, an ultra-lightweight unsupervised framework featuring: (i) a backbone-agnostic feature extractor with only two convolutional layers; (ii) a parameter-free iterative restoration module that reuses the extracted features across steps; and (iii) an unsupervised loss function integrating exposure control, edge-aware smoothing, and multi-scale color consistency. Contribution/Results: LiteIE achieves high-quality enhancement with only 58 parameters—the first such result under this degree of structural simplification—attaining 19.04 dB PSNR on the LOL dataset, surpassing state-of-the-art (SOTA) methods by 1.4 dB while using just 0.07% of their parameter count. On Snapdragon 8 Gen 3, it enables real-time 4K image processing at 30 FPS, significantly improving cross-scene generalization and deployment efficiency.

📝 Abstract
Real-time low-light image enhancement on mobile and embedded devices requires models that balance visual quality and computational efficiency. Existing deep learning methods often rely on large networks and labeled datasets, limiting their deployment on resource-constrained platforms. In this paper, we propose LiteIE, an ultra-lightweight unsupervised enhancement framework that eliminates dependence on large-scale supervision and generalizes well across diverse conditions. We design a backbone-agnostic feature extractor with only two convolutional layers to produce compact enhancement tensors from image features. In addition, we develop a parameter-free Iterative Restoration Module, which reuses the extracted features to progressively recover fine details lost in earlier enhancement steps, without introducing any additional learnable parameters. We further propose an unsupervised training objective that integrates exposure control, edge-aware smoothness, and multi-scale color consistency losses. In experiments on the LOL dataset, LiteIE achieves 19.04 dB PSNR, surpassing SOTA by 1.4 dB while using only 0.07% of its parameters. On a Snapdragon 8 Gen 3 mobile processor, LiteIE runs at 30 FPS for 4K images with just 58 parameters, enabling real-time deployment on edge devices. These results establish LiteIE as an efficient and practical solution for low-light enhancement on resource-limited platforms.
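The abstract's two-stage design (a tiny extractor producing an enhancement tensor, then a parameter-free iterative step that reuses it) can be sketched in NumPy. This is a minimal illustration under assumptions: the exact kernel sizes and channel counts of LiteIE's two convolutional layers are not given here, so pointwise (1×1) layers stand in, and the quadratic curve update is borrowed from Zero-DCE-style enhancement rather than taken from the paper.

```python
import numpy as np

def extract_enhancement_tensor(img, w1, b1, w2, b2):
    """Two-layer extractor sketch: two 1x1 convolutions (pointwise
    linear maps) with ReLU and tanh, standing in for LiteIE's two
    convolutional layers (exact architecture is an assumption)."""
    h = np.maximum(img @ w1 + b1, 0.0)        # layer 1 + ReLU
    return np.tanh(h @ w2 + b2)               # layer 2 -> tensor A in [-1, 1]

def iterative_restore(img, A, steps=4):
    """Parameter-free iteration that reuses the same tensor A at every
    step. The quadratic curve update is a Zero-DCE-style stand-in;
    LiteIE's actual recurrence may differ."""
    x = img
    for _ in range(steps):
        x = x + A * x * (1.0 - x)             # no new learnable parameters
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(8, 8, 3))   # dark toy image in [0, 1]
w1, b1 = rng.normal(size=(3, 4)) * 0.1, np.zeros(4)
w2, b2 = rng.normal(size=(4, 3)) * 0.1, np.zeros(3)
A = extract_enhancement_tensor(img, w1, b1, w2, b2)
out = iterative_restore(img, A)
```

Because the iteration only reapplies `A`, every learnable parameter lives in the extractor, which is how a budget as small as 58 parameters becomes plausible.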
Problem

Research questions and friction points this paper is trying to address.

Balancing visual quality and computational efficiency for mobile low-light enhancement
Eliminating dependence on large networks and labeled datasets
Enabling real-time 4K image enhancement on resource-limited devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ultra-lightweight unsupervised enhancement framework
Backbone-agnostic feature extractor with two layers
Parameter-free Iterative Restoration Module
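Two of the unsupervised loss terms named above can be sketched to show how training proceeds without labels. The patch size, target exposure value, and single-scale color term below are illustrative assumptions; the paper applies color consistency at multiple scales and does not specify these constants here.

```python
import numpy as np

def exposure_loss(enhanced, patch=4, target=0.6):
    """Exposure-control term: average-pool the enhanced image into
    patches and penalise deviation from a target exposure level
    (patch size and target are illustrative choices)."""
    h, w, _ = enhanced.shape
    gray = enhanced.mean(axis=2)
    pooled = gray[:h - h % patch, :w - w % patch] \
        .reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    return float(((pooled - target) ** 2).mean())

def color_consistency_loss(enhanced):
    """Color-consistency term: penalise divergence between the mean
    values of the RGB channels (gray-world assumption); shown at a
    single scale for brevity."""
    m = enhanced.mean(axis=(0, 1))            # per-channel means
    diffs = [(m[i] - m[j]) ** 2 for i in range(3) for j in range(i + 1, 3)]
    return float(np.mean(diffs))

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(16, 16, 3))
loss = exposure_loss(img) + color_consistency_loss(img)
```

Both terms depend only on the enhanced output, which is what lets the framework train without paired low-light/normal-light supervision.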
Guangrui Bai
Key Laboratory of Precision and Intelligent Chemistry, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, China.
Hailong Yan
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
Wenhai Liu
Shanghai Jiao Tong University
Yahui Deng
Key Laboratory of Precision and Intelligent Chemistry, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, China.
Erbao Dong
Key Laboratory of Precision and Intelligent Chemistry, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, China.