🤖 AI Summary
This work addresses the severe degradation of infrared-visible images in marine environments caused by fog and strong reflections, a challenge exacerbated by the absence of end-to-end collaborative frameworks and of real-world multimodal marine datasets. To tackle this, the authors propose a Multi-task Complementary Learning Framework (MCLF), introduce the Infrared-Visible Maritime Ship Dataset (IVMSD), which they present as the first infrared-visible ship dataset tailored to marine scenes, and develop three key components: a Frequency-Spatial Enhancement Complementary (FSEC) module, a Semantic-Visual Consistency Attention (SVCA) module, and a cross-modality guided attention mechanism. These components jointly optimize image restoration, multimodal fusion, and semantic segmentation. Experimental results on the IVMSD benchmark demonstrate that the proposed method significantly improves segmentation accuracy and enhances perception robustness under complex marine conditions.
📝 Abstract
Marine scene understanding and segmentation play a vital role in maritime monitoring and navigation safety. However, prevalent factors in maritime environments, such as fog and strong reflections, cause severe image degradation, significantly compromising the stability of semantic perception. Existing restoration and enhancement methods typically target specific degradations or focus solely on visual quality, lacking end-to-end collaborative mechanisms that simultaneously improve structural recovery and semantic effectiveness. Moreover, publicly available infrared-visible datasets are predominantly collected from urban scenes and fail to capture the authentic characteristics of coupled degradations in marine environments. To address these challenges, the Infrared-Visible Maritime Ship Dataset (IVMSD) is proposed, covering various maritime scenarios under diverse weather and illumination conditions. Building upon this dataset, a Multi-task Complementary Learning Framework (MCLF) is proposed to collaboratively perform image restoration, multimodal fusion, and semantic segmentation within a unified architecture. The framework comprises a Frequency-Spatial Enhancement Complementary (FSEC) module for degradation suppression and structural enhancement, a Semantic-Visual Consistency Attention (SVCA) module for semantically consistent guidance, and a cross-modality guided attention mechanism for selective fusion. Experimental results on IVMSD demonstrate that the proposed method achieves state-of-the-art segmentation performance, significantly enhancing robustness and perceptual quality under complex maritime conditions.