WTEFNet: Real-Time Low-Light Object Detection for Advanced Driver-Assistance Systems

📅 2025-05-29
🤖 AI Summary
To address the significant performance degradation of Advanced Driver Assistance Systems (ADAS) in low-light nighttime conditions, this paper proposes WTEFNet, a real-time object detection framework. Methodologically, it integrates wavelet-based multi-scale frequency-domain modeling with an adaptive fusion of illumination and semantic features, comprising three core modules trained end-to-end: Low-Light Enhancement (LLE), Wavelet-based Feature Extraction (WFE), and Adaptive Fusion Detection (AFFD). Key contributions include: (1) GSN, a manually annotated, real-world driving dataset for nighttime scenarios, including rainy conditions; (2) state-of-the-art detection accuracy on the BDD100K, SHIFT, nuScenes, and GSN benchmarks; and (3) real-time inference exceeding 30 FPS on the NVIDIA Jetson AGX Orin platform, satisfying stringent automotive deployment requirements.

📝 Abstract
Object detection is a cornerstone of environmental perception in advanced driver assistance systems (ADAS). However, most existing methods rely on RGB cameras, which suffer from significant performance degradation under low-light conditions due to poor image quality. To address this challenge, we propose WTEFNet, a real-time object detection framework specifically designed for low-light scenarios, with strong adaptability to mainstream detectors. WTEFNet comprises three core modules: a Low-Light Enhancement (LLE) module, a Wavelet-based Feature Extraction (WFE) module, and an Adaptive Fusion Detection (AFFD) module. The LLE module enhances dark regions while suppressing overexposed areas; the WFE module applies multi-level discrete wavelet transforms to isolate high- and low-frequency components, enabling effective denoising and structural feature retention; the AFFD module fuses semantic and illumination features for robust detection. To support training and evaluation, we introduce GSN, a manually annotated dataset covering both clear and rainy night-time scenes. Extensive experiments on BDD100K, SHIFT, nuScenes, and GSN demonstrate that WTEFNet achieves state-of-the-art accuracy under low-light conditions. Furthermore, deployment on an embedded platform (NVIDIA Jetson AGX Orin) confirms the framework's suitability for real-time ADAS applications.
Problem

Research questions and friction points this paper is trying to address.

Enhancing object detection in low-light ADAS scenarios
Real-time performance on embedded platforms for ADAS
Improving accuracy in rainy and night-time conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Light Enhancement module improves dark regions
Wavelet-based Feature Extraction denoises and retains features
Adaptive Fusion Detection combines semantic and illumination features
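The wavelet idea behind the WFE module can be sketched in a few lines. The snippet below is a minimal illustration using PyWavelets on a toy noisy image, not the paper's implementation: WTEFNet applies the transform within a learned feature-extraction pipeline, whereas here a fixed Haar wavelet and a hand-picked soft threshold stand in for that machinery.

```python
import numpy as np
import pywt

# Toy "low-light frame": a smooth gradient (structure) plus sensor noise.
rng = np.random.default_rng(0)
frame = np.linspace(0.0, 0.2, 64 * 64).reshape(64, 64)
frame = frame + 0.05 * rng.standard_normal((64, 64))

# Two-level 2-D discrete wavelet transform: coeffs[0] is the low-frequency
# approximation; each following tuple holds the high-frequency detail
# subbands (horizontal, vertical, diagonal) at one scale.
coeffs = pywt.wavedec2(frame, wavelet="haar", level=2)

# Simple denoising: soft-threshold the high-frequency subbands, where most
# sensor noise concentrates, then invert the transform. The threshold 0.1
# is an arbitrary choice for this toy example.
thresholded = [coeffs[0]] + [
    tuple(pywt.threshold(band, value=0.1, mode="soft") for band in level)
    for level in coeffs[1:]
]
denoised = pywt.waverec2(thresholded, wavelet="haar")
print(denoised.shape)  # (64, 64) — same spatial size as the input
```

Separating the subbands this way lets low-frequency structure pass through untouched while noise-dominated high-frequency detail is attenuated, which is the retention-plus-denoising behavior the abstract attributes to the WFE module.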
Hao Wu
Guangdong Provincial Key Laboratory of Intelligent Transportation System, School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
Junzhou Chen
Guangdong Provincial Key Laboratory of Intelligent Transportation System, School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
Ronghui Zhang
Guangdong Provincial Key Laboratory of Intelligent Transportation System, School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
Nengchao Lyu
Intelligent Transportation Systems Research Center, Wuhan University of Technology, Wuhan 430063, China
Hongyu Hu
State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China
Yanyong Guo
Professor of Transportation Engineering, Southeast University
Road Traffic Safety · Traffic Design · Pedestrian Behavior
Tony Z. Qiu
Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada