GM-MoE: Low-Light Enhancement with Gated-Mechanism Mixture-of-Experts

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing low-light enhancement methods suffer from poor generalization, limiting their applicability across diverse domains such as autonomous driving and 3D reconstruction. To address this, we propose the first Mixture-of-Experts (MoE) framework for low-light enhancement, featuring a gating-driven dynamic weighting mechanism. Our architecture comprises three parallel experts, each specialized for distinct enhancement subtasks; a learnable gating module enables cross-domain adaptive weight allocation; and each expert incorporates a local-global multi-scale feature fusion module. The entire model is end-to-end trainable. Extensive experiments demonstrate state-of-the-art generalization: our method achieves the best overall performance among 25 competing approaches. It attains SOTA PSNR on five benchmarks and SOTA SSIM on four benchmarks, significantly improving cross-scene robustness and task versatility.

📝 Abstract
Low-light enhancement has wide applications in autonomous driving, 3D reconstruction, remote sensing, surveillance, and related areas, where it can significantly improve information utilization. However, most existing methods lack generalization and are limited to specific tasks such as image recovery. To address these issues, we propose Gated-Mechanism Mixture-of-Experts (GM-MoE), the first framework to introduce a mixture-of-experts network for low-light image enhancement. GM-MoE comprises a dynamic gated weight conditioning network and three sub-expert networks, each specializing in a distinct enhancement task. A self-designed gated mechanism dynamically adjusts the weights of the sub-expert networks for different data domains. Additionally, we integrate local and global feature fusion within the sub-expert networks to enhance image quality by capturing multi-scale features. Experimental results demonstrate that GM-MoE achieves superior generalization compared with 25 competing approaches, reaching state-of-the-art PSNR on 5 benchmarks and SSIM on 4 benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addresses generalization issues in low-light image enhancement.
Introduces GM-MoE for dynamic multi-task image enhancement.
Improves image quality by fusing local and global features.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a Mixture-of-Experts network for enhancement.
Dynamic gated mechanism adjusts sub-expert weights.
Local and global feature fusion improves image quality.
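The gating idea above can be illustrated with a minimal sketch: a gating network produces softmax weights over the experts, and the output is the weighted sum of the expert outputs. The three toy "experts" below (gamma correction, contrast stretch, identity) and the fixed gate logits are illustrative stand-ins; in GM-MoE both the experts and the gate are learned sub-networks.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical stand-in experts; the paper's experts are learned sub-networks.
def expert_gamma(img):    return img ** 0.5                                   # brighten shadows
def expert_stretch(img):  return (img - img.min()) / (img.max() - img.min() + 1e-8)
def expert_identity(img): return img

experts = [expert_gamma, expert_stretch, expert_identity]

def gm_moe_forward(img, gate_logits):
    """Gated MoE forward pass: softmax-weighted combination of expert outputs.

    In the actual model, gate_logits would come from a learned gating
    network conditioned on the input; here they are supplied directly.
    """
    weights = softmax(np.asarray(gate_logits, dtype=float))
    outputs = [e(img) for e in experts]
    return sum(w * o for w, o in zip(weights, outputs))

# A dark synthetic "image" with values in [0, 0.2]
img = np.linspace(0.0, 0.2, 16).reshape(4, 4)
out = gm_moe_forward(img, gate_logits=[2.0, 0.5, -1.0])
```

Because the gate weights are a softmax, the combination is a convex mixture: each input (or data domain) can lean on whichever expert suits it, which is the mechanism the paper credits for cross-domain generalization.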
Hao Bo Dong
HUC, China
Xinyi Wang
CUST, China
Ziyang Yan
University of Central Florida | University of Trento | FBK
3D Reconstruction · Computer Vision · AIGC
Yihua Shao
USTB, China