Unified Restoration-Perception Learning: Maritime Infrared-Visible Image Fusion and Segmentation

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the severe degradation of infrared-visible images in marine environments caused by fog and strong reflections, a challenge exacerbated by the absence of end-to-end collaborative frameworks and real-world multimodal marine datasets. To tackle this, the authors propose a Multi-task Complementary Learning Framework (MCLF), introduce the first infrared-visible maritime ship dataset (IVMSD) tailored for marine scenes, and develop three key components: a Frequency-Spatial Enhancement Complementary (FSEC) module, a Semantic-Visual Consistency Attention (SVCA) module, and a cross-modality guided attention mechanism. These innovations jointly optimize image restoration, multimodal fusion, and semantic segmentation. Experimental results demonstrate that the proposed method significantly improves segmentation accuracy and enhances perception robustness under complex marine conditions on the IVMSD benchmark.
📝 Abstract
Marine scene understanding and segmentation play a vital role in maritime monitoring and navigation safety. However, prevalent factors like fog and strong reflections in maritime environments cause severe image degradation, significantly compromising the stability of semantic perception. Existing restoration and enhancement methods typically target specific degradations or focus solely on visual quality, lacking end-to-end collaborative mechanisms that simultaneously improve structural recovery and semantic effectiveness. Moreover, publicly available infrared-visible datasets are predominantly collected from urban scenes, failing to capture the authentic characteristics of coupled degradations in marine environments. To address these challenges, the Infrared-Visible Maritime Ship Dataset (IVMSD) is proposed to cover various maritime scenarios under diverse weather and illumination conditions. Building upon this dataset, a Multi-task Complementary Learning Framework (MCLF) is proposed to collaboratively perform image restoration, multimodal fusion, and semantic segmentation within a unified architecture. The framework includes a Frequency-Spatial Enhancement Complementary (FSEC) module for degradation suppression and structural enhancement, a Semantic-Visual Consistency Attention (SVCA) module for semantically consistent guidance, and a cross-modality guided attention mechanism for selective fusion. Experimental results on IVMSD demonstrate that the proposed method achieves state-of-the-art segmentation performance, significantly enhancing robustness and perceptual quality under complex maritime conditions.
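The abstract describes restoration, fusion, and segmentation being optimized jointly within one architecture. The paper does not specify the form of the combined objective; a common pattern for such multi-task training is a weighted sum of the per-task losses, sketched below. The function name, weights, and loss values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a joint multi-task objective for a framework like
# MCLF: restoration, fusion, and segmentation losses combined by weights.
# The weighting scheme here is an assumption for illustration only.

def joint_loss(l_restore: float, l_fusion: float, l_seg: float,
               w=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of the three task losses (weights are assumptions)."""
    return w[0] * l_restore + w[1] * l_fusion + w[2] * l_seg

# Example: per-task loss values with segmentation weighted most heavily.
total = joint_loss(0.8, 0.5, 1.2, w=(0.5, 0.3, 1.0))
```

In practice such weights are either hand-tuned or learned (e.g. via uncertainty weighting); the abstract does not say which strategy the authors use.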
Problem

Research questions and friction points this paper is trying to address.

maritime image degradation
infrared-visible image fusion
semantic segmentation
image restoration
multimodal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

infrared-visible image fusion
maritime image restoration
unified multi-task learning
semantic segmentation
cross-modality attention
Weichao Cai
Xiamen University, Xiamen, Fujian 361005, China
Weiliang Huang
University of Macau, Macau, China
Biao Xue
Xiamen University, Xiamen, Fujian 361005, China
Chao Huang
Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
Fei Yuan
Minnesota State University, Mankato
remote sensing, GIS, environmental monitoring and assessment, natural resource mapping
Bob Zhang
University of Macau
biometrics, pattern recognition, image processing