Rethinking Nighttime Image Deraining via Learnable Color Space Transformation

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Nighttime image deraining is highly challenging due to the strong coupling between rain streaks and low illumination, as well as the scarcity of high-quality labeled data. To address these issues, we introduce HQ-NightRain—the first high-fidelity benchmark specifically designed for nighttime deraining—and propose CST-Net. Our method enhances rain modeling in the Y channel via a learnable color-space transformation, integrates a Y-channel attention mechanism, and incorporates an implicit illumination-guided module to explicitly decouple and jointly optimize rain removal and illumination recovery. Extensive experiments on multiple nighttime deraining benchmarks demonstrate that CST-Net significantly outperforms existing state-of-the-art methods. Both quantitative metrics and qualitative visual results confirm its superior capability in eliminating complex rain artifacts while preserving fine details and structural integrity. The code and dataset are publicly available.

📝 Abstract
Compared to daytime image deraining, nighttime image deraining poses significant challenges due to the inherent complexities of nighttime scenarios and the lack of high-quality datasets that accurately represent the coupling effect between rain and illumination. In this paper, we rethink the task of nighttime image deraining and contribute a new high-quality benchmark, HQ-NightRain, which offers higher harmony and realism than existing datasets. In addition, we develop an effective Color Space Transformation Network (CST-Net) to better remove complex rain from nighttime scenes. Specifically, we propose a learnable color space converter (CSC) to facilitate rain removal in the Y channel, as nighttime rain is more pronounced in the Y channel than in the RGB color space. To capture illumination information for guiding nighttime deraining, implicit illumination guidance is introduced, enabling the learned features to improve the model's robustness in complex scenarios. Extensive experiments show the value of our dataset and the effectiveness of our method. The source code and datasets are available at https://github.com/guanqiyuan/CST-Net.
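The abstract's core idea, a learnable color space converter that moves rain removal into the Y channel, can be illustrated with a minimal sketch. This assumes the converter generalizes a fixed RGB-to-YCbCr matrix into trainable parameters; the `LearnableCSC` class, its BT.601 initialization, and the training setup are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr transform, used here as the learnable starting point.
BT601 = np.array([
    [ 0.299,     0.587,     0.114   ],   # Y  (luminance)
    [-0.168736, -0.331264,  0.5     ],   # Cb
    [ 0.5,      -0.418688, -0.081312],   # Cr
])

class LearnableCSC:
    """3x3 color transform whose weights start at BT.601 and would be
    updated by gradient descent during training (training loop omitted)."""

    def __init__(self):
        self.weight = BT601.copy()      # trainable 3x3 matrix
        self.bias = np.zeros(3)         # trainable per-channel offset

    def forward(self, img):
        # img: (H, W, 3) RGB in [0, 1] -> (H, W, 3) learned "YCbCr-like" space
        return img @ self.weight.T + self.bias

    def y_channel(self, img):
        # Rain streaks are reported to be more pronounced in the Y channel,
        # so a downstream attention module would operate on this slice.
        return self.forward(img)[..., 0]

csc = LearnableCSC()
rgb = np.ones((4, 4, 3)) * np.array([1.0, 0.0, 0.0])  # pure red patch
print(round(float(csc.y_channel(rgb)[0, 0]), 3))       # Y of pure red = 0.299
```

Making the matrix trainable rather than fixed lets the network discover a luminance-like channel that best separates rain from scene content, instead of committing to the hand-crafted BT.601 weights.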
Problem

Research questions and friction points this paper is trying to address.

Addressing nighttime image deraining, where rain streaks are strongly coupled with low illumination
Building a high-quality benchmark dataset (HQ-NightRain) that realistically captures nighttime rain scenarios
Designing a color space transformation network (CST-Net) for improved rain removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable color space converter (CSC) for rain removal
Rain removal in the Y channel, where nighttime rain is more pronounced than in RGB
Implicit illumination guidance for robust deraining in complex scenes
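The paper does not detail its implicit illumination guidance, but one plausible reading is that an illumination map estimated from the input modulates the deraining features so that dark regions are treated differently from bright ones. The sketch below uses a hand-crafted max-RGB prior as a stand-in for the learned module; the function names and the prior itself are assumptions, not the paper's method.

```python
import numpy as np

def estimate_illumination(img, eps=1e-6):
    # Max-RGB prior: the brightest channel per pixel approximates illumination.
    # Clipping avoids division by zero in fully dark pixels.
    return np.clip(img.max(axis=-1, keepdims=True), eps, 1.0)

def illumination_guided(features, img):
    # Scale features by 1 / illumination: low-light pixels receive amplified
    # guidance, mimicking a learned modulation with a fixed stand-in.
    illum = estimate_illumination(img)
    return features / illum

img = np.full((2, 2, 3), 0.25)                  # uniformly dim scene
feat = np.ones((2, 2, 1))                       # dummy feature map
print(illumination_guided(feat, img)[0, 0, 0])  # 1 / 0.25 = 4.0
```

In the actual network this modulation would presumably be learned end-to-end alongside rain removal, which is what allows the two subtasks to be jointly optimized.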
Qiyuan Guan
Dalian Polytechnic University
Xiang Chen
Nanjing University of Science and Technology
Guiyue Jin
Dalian Polytechnic University
Jiyu Jin
Dalian Polytechnic University
Shumin Fan
Dalian Maritime University
Tianyu Song
Technical University of Munich
Augmented Reality, Robotics, Image-Guided Interventions, Computer Vision
Jinshan Pan
Nanjing University of Science and Technology
Computer Vision, Image Processing, Computational Photography, Machine Learning