AI Summary
To address the challenges of scarce annotated samples and poor generalization to novel classes in infrastructure defect detection, this paper proposes an attention-enhanced few-shot semantic segmentation framework. Methodologically, it integrates an Inception-SepConv module with mask-average pooling to generate robust class prototypes, and designs global, local, and cross-scale attention mechanisms to achieve multi-level feature alignment. Furthermore, it enhances the Feature Pyramid Network (termed E-FPN) with depthwise separable convolutions to improve multi-scale feature extraction and prototype-matching efficiency. Evaluated on real-world tunnel and sewer defect datasets, the model achieves an 82.55% F1-score and 72.26% mIoU. Ablation studies show that the self-attention modules yield absolute improvements of +2.57% in F1-score and +2.9% in mIoU over the baseline, validating the framework's effectiveness and strong generalization in few-shot settings.
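The mask-average pooling step mentioned above can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the feature-map and mask shapes are assumed for the toy example:

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Compute a class prototype by averaging the encoder's feature
    vectors over the spatial positions selected by a binary support mask.

    features: (C, H, W) feature map from the encoder
    mask:     (H, W) binary mask of the target class (1 = defect pixel)
    returns:  (C,) prototype vector
    """
    # Zero out background positions, then divide by the number of
    # foreground pixels (epsilon guards against an all-zero mask).
    weighted = features * mask[None, :, :]            # (C, H, W)
    return weighted.sum(axis=(1, 2)) / (mask.sum() + 1e-8)

# Toy example: a 4-channel 3x3 feature map; the mask selects two pixels.
feats = np.arange(36, dtype=float).reshape(4, 3, 3)
mask = np.zeros((3, 3))
mask[0, 0] = 1.0
mask[2, 2] = 1.0
proto = masked_average_pooling(feats, mask)   # mean of the two selected vectors
```

At query time, each spatial feature vector is typically compared to such prototypes (e.g. by cosine similarity) to produce the segmentation, which is why robust prototypes from only a few support examples matter.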
Abstract
Few-shot semantic segmentation is vital for deep learning-based infrastructure inspection, where labeled training examples are scarce and expensive. Although existing deep learning frameworks perform well, they require extensive labeled datasets and cannot learn new defect categories from little data. We present our Enhanced Feature Pyramid Network (E-FPN) framework for few-shot semantic segmentation of culvert and sewer defect categories using a prototypical learning approach. Our work makes three main contributions: (1) an adaptive E-FPN encoder using Inception-SepConv blocks and depthwise separable convolutions for efficient multi-scale feature extraction; (2) prototypical learning with masked average pooling to generate robust prototypes from small support sets; and (3) attention-based feature representation through global self-attention, local self-attention, and cross-attention. Comprehensive experiments on challenging infrastructure inspection datasets show that the method achieves strong few-shot performance, with the best result obtained by an 8-way 5-shot training configuration: 82.55% F1-score and 72.26% mIoU under 2-way classification testing. The self-attention modules provided the largest gains, improving the F1-score by 2.57% and mIoU by 2.9% over the baseline. Our framework addresses the critical need to respond rapidly to new defect types in infrastructure inspection systems with limited new training data, enabling more efficient and economical maintenance planning for critical infrastructure.
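A quick back-of-the-envelope calculation shows why depthwise separable convolutions make the E-FPN encoder efficient. A standard k x k convolution couples every input channel to every output channel, while the separable version factors this into a per-channel spatial filter followed by a 1 x 1 pointwise mix. The channel sizes below are hypothetical, chosen only to illustrate the parameter ratio:

```python
def conv_params(c_in, c_out, k):
    # Standard 2-D convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def sepconv_params(c_in, c_out, k):
    # Depthwise separable: one k x k depthwise filter per input channel,
    # then a 1 x 1 pointwise convolution to mix channels.
    return c_in * k * k + c_in * c_out

# Example: a 3x3 layer mapping 256 -> 256 channels (hypothetical sizes).
std = conv_params(256, 256, 3)      # 256*256*9   = 589,824 parameters
sep = sepconv_params(256, 256, 3)   # 256*9 + 256*256 = 67,840 parameters
ratio = std / sep                   # roughly 8.7x fewer parameters
```

The savings grow with kernel size and channel count, which is why such factorized convolutions are a common choice when a multi-scale encoder must stay lightweight.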