🤖 AI Summary
Low-light conditions in underwater environments cause blurred segmentation boundaries and severe detail loss in semantic segmentation. To address this, we propose UWSegFormer, a Transformer-based underwater semantic segmentation framework. Its key contributions include: (i) the Underwater Image Quality Attention (UIQA) module, which adaptively enhances channel-wise representations in low-quality regions; (ii) the Multi-scale Aggregation Attention (MAA) module, which fuses cross-level features to recover fine-grained semantic details; and (iii) the Edge Learning Loss (ELL), which explicitly optimizes boundary-aware segmentation. Evaluated on the SUIM and DUT-USEG benchmarks, UWSegFormer achieves state-of-the-art mean Intersection-over-Union (mIoU) scores of 82.12% and 71.41%, respectively. It significantly improves segmentation completeness, boundary sharpness, and detail fidelity, delivering a robust, high-precision solution for low-illumination underwater scenes.
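To make the channel self-attention idea behind UIQA concrete, the following is a minimal, framework-free sketch of channel-wise self-attention over a feature map. It is an illustration of the general mechanism only; the function name, shapes, and the residual formulation are assumptions for this toy example, not the paper's actual UIQA implementation.

```python
import numpy as np

def channel_self_attention(x: np.ndarray) -> np.ndarray:
    """Toy channel-wise self-attention over a (C, H, W) feature map.

    Affinities are computed between channels (not spatial positions),
    so weakly informative channels can be re-weighted using more
    informative ones. Illustrative sketch, not the paper's UIQA module.
    """
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                      # (C, N) per-channel vectors
    scores = flat @ flat.T / np.sqrt(h * w)         # (C, C) channel affinities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)        # softmax over channels
    out = (attn @ flat).reshape(c, h, w)            # attention-weighted channels
    return x + out                                  # residual connection

feats = np.random.default_rng(0).normal(size=(4, 8, 8))
enhanced = channel_self_attention(feats)
print(enhanced.shape)  # (4, 8, 8)
```

Note that attention here is a C x C matrix, so its cost grows with the number of channels rather than with the spatial resolution, which is one reason channel attention is attractive for dense prediction.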
📝 Abstract
Underwater image understanding is crucial for both submarine navigation and seabed exploration. However, the low illumination in underwater environments degrades imaging quality, which in turn seriously deteriorates the performance of underwater semantic segmentation, particularly for outlining object region boundaries. To tackle this issue, we present UnderWater SegFormer (UWSegFormer), a Transformer-based framework for semantic segmentation of low-quality underwater images. First, we propose the Underwater Image Quality Attention (UIQA) module, which enhances the representation of high-quality semantic information in underwater image feature channels through a channel self-attention mechanism. To address the loss of imaging detail caused by the underwater environment, we propose the Multi-scale Aggregation Attention (MAA) module, which aggregates semantic features at different scales by extracting discriminative information from high-level features, thus compensating for the loss of semantic detail in underwater objects. Finally, during training we introduce the Edge Learning Loss (ELL) to strengthen the model's learning of underwater object edges and improve its prediction accuracy. Experiments conducted on the SUIM and DUT-USEG (DUT) datasets demonstrate that the proposed method outperforms SOTA methods in segmentation completeness, boundary clarity, and subjective perceptual detail. In addition, the proposed method achieves the highest mIoU of 82.12% and 71.41% on the SUIM and DUT datasets, respectively. Code will be available at https://github.com/SAWRJJ/UWSegFormer.
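The abstract's Edge Learning Loss supervises boundary pixels explicitly. A common way to realize boundary-aware supervision is to derive an edge mask from the ground-truth label map and up-weight the per-pixel loss at boundaries; the sketch below shows that generic pattern. The function names, the 4-neighbour edge definition, and the `edge_weight` parameter are assumptions for illustration, not the paper's exact ELL formulation.

```python
import numpy as np

def edge_map(labels: np.ndarray) -> np.ndarray:
    """Binary edge mask: a pixel is an edge if any 4-neighbour's label differs."""
    e = np.zeros_like(labels, dtype=bool)
    e[:-1, :] |= labels[:-1, :] != labels[1:, :]   # compare with pixel below
    e[1:, :]  |= labels[1:, :]  != labels[:-1, :]  # compare with pixel above
    e[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # compare with pixel right
    e[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]  # compare with pixel left
    return e

def edge_weighted_ce(probs: np.ndarray, labels: np.ndarray,
                     edge_weight: float = 2.0) -> float:
    """Pixel-wise cross-entropy with extra weight on boundary pixels.

    probs: (K, H, W) softmax probabilities over K classes.
    labels: (H, W) integer ground-truth class map.
    """
    h, w = labels.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    p_true = probs[labels, rows, cols]             # probability of the true class
    ce = -np.log(np.clip(p_true, 1e-8, None))      # per-pixel cross-entropy
    weights = np.where(edge_map(labels), edge_weight, 1.0)
    return float((weights * ce).mean())

labels = np.array([[0, 0, 1],
                   [0, 1, 1]])
probs = np.full((2, 2, 3), 0.5)                    # uniform 2-class predictions
print(edge_weighted_ce(probs, labels))
```

Because only the loss weighting changes, this kind of edge term can be added to an existing segmentation training loop without modifying the network architecture.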