🤖 AI Summary
To address the degradation of salient object detection (SOD) performance under three common imaging conditions—nighttime haze, daytime haze, and low illumination—this paper proposes a unified multi-task enhancement network. The method introduces a novel task-oriented node learning mechanism and a multi-receptive-field enhancement module, enabling joint optimization of daytime dehazing, nighttime dehazing, and low-light enhancement within a single network for the first time. It incorporates self-attention to strengthen fine-detail recovery in nighttime scenes and employs parallel three-branch depthwise separable convolutions for multi-scale feature extraction, coupled with a hybrid loss function combining L1, perceptual, and contrast constraints. Extensive experiments on diverse real-world datasets demonstrate significant improvements over state-of-the-art methods in both SOD accuracy and robustness. The source code is publicly available.
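The hybrid loss mentioned above combines L1, perceptual, and contrast terms. The paper's exact formulation and weights are not given here, so the following is a minimal NumPy sketch under assumptions: `feat` is a stand-in for a pretrained feature extractor (e.g. VGG activations), the contrast term is approximated by a difference in standard deviations, and the weights `w` are illustrative, not the paper's values.

```python
import numpy as np

def l1_loss(pred, target):
    # pixel-wise reconstruction term
    return np.mean(np.abs(pred - target))

def perceptual_loss(pred, target, feat):
    # `feat` is a placeholder for a frozen feature extractor (assumption)
    return np.mean((feat(pred) - feat(target)) ** 2)

def contrast_loss(pred, target):
    # crude proxy for a contrast constraint: match global std dev (assumption)
    return abs(pred.std() - target.std())

def hybrid_loss(pred, target, feat, w=(1.0, 0.1, 0.05)):
    # weighted sum of the three terms; weights are illustrative
    return (w[0] * l1_loss(pred, target)
            + w[1] * perceptual_loss(pred, target, feat)
            + w[2] * contrast_loss(pred, target))
```

With identical prediction and target, every term vanishes, which is a quick sanity check on any such composite loss.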
📝 Abstract
Salient object detection (SOD) plays a critical role in vision-driven measurement systems (VMS), facilitating the detection and segmentation of key visual elements in an image. However, adverse imaging conditions such as daytime haze, low light, and nighttime haze severely degrade image quality, complicating the SOD process. To address these challenges, we propose a multi-task-oriented nighttime haze imaging enhancer (MToIE), which integrates three tasks: daytime dehazing, low-light enhancement, and nighttime dehazing. MToIE incorporates two key innovative components. First, the network employs a task-oriented node learning mechanism to handle three specific degradation types: daytime haze, low-light, and nighttime haze conditions, with an embedded self-attention module enhancing its performance in nighttime imaging. Second, a multi-receptive-field enhancement module efficiently extracts multi-scale features through three parallel depthwise separable convolution branches with different dilation rates, capturing comprehensive spatial information with minimal computational overhead. To ensure optimal image reconstruction quality and visual characteristics, we propose a hybrid loss function. Extensive experiments on different types of weather/imaging conditions illustrate that MToIE surpasses existing methods, significantly enhancing the accuracy and reliability of vision systems across diverse imaging scenarios. The code is available at https://github.com/Ai-Chen-Lab/MToIE.
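The multi-receptive-field enhancement module described above runs three parallel depthwise separable convolution branches with different dilation rates. As a minimal single-channel NumPy sketch of that idea: each branch applies a 3×3 dilated depthwise convolution with "same" zero padding, and the branches are fused by summation (the fusion rule and dilation rates `(1, 2, 3)` are assumptions; the paper may concatenate branches and fuse with a pointwise convolution instead).

```python
import numpy as np

def dilated_depthwise_conv(x, kernel, dilation):
    """3x3 dilated depthwise convolution on one channel,
    zero-padded so the output keeps the input's spatial size."""
    pad = dilation  # a 3x3 kernel with dilation d spans 2d+1 pixels
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros((h, w), dtype=float)
    for ki in range(3):
        for kj in range(3):
            oi, oj = ki * dilation, kj * dilation
            out += kernel[ki, kj] * xp[oi:oi + h, oj:oj + w]
    return out

def multi_receptive_field_block(x, kernels, dilations=(1, 2, 3)):
    """Three parallel dilated branches, fused by summation (assumption)."""
    return sum(dilated_depthwise_conv(x, k, d)
               for k, d in zip(kernels, dilations))
```

Increasing the dilation rate widens each branch's receptive field without adding parameters, which is how the module captures multi-scale context cheaply.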