AI Summary
To address the poor robustness of colonoscopic polyp segmentation in multi-center, multi-modal clinical settings, this paper proposes a novel Transformer-based segmentation framework. Methodologically, it introduces three key innovations: (1) a Focus Attention Module (FAM) that integrates local attention with pooling-based attention; (2) a synergistic architecture comprising a Cross-semantic Interaction Decoder Module (CIDM) and a Detail Enhancement Module (DEM), enabling unified global context modeling while preserving fine-grained local texture; and (3) native support for joint training and inference across five endoscopic modalities: BLI, FICE, LCI, NBI, and WLI. Evaluated on the multi-center, multi-modal PolypDB benchmark, the method achieves state-of-the-art Dice scores of 93.42% (WLI) and 92.04% (LCI), outperforming all prior approaches. The source code is publicly available.
Abstract
Colonoscopy is vital for the early diagnosis of colorectal polyps, and regular screening can effectively prevent benign polyps from progressing to colorectal cancer (CRC). While deep learning has made impressive strides in polyp segmentation, most existing models are trained on single-modality, single-center data, making them less effective in real-world clinical environments. To overcome these limitations, we propose FocusNet, a Transformer-enhanced focus attention network designed to improve polyp segmentation. FocusNet incorporates three essential modules: the Cross-semantic Interaction Decoder Module (CIDM), which generates coarse segmentation maps; the Detail Enhancement Module (DEM), which refines shallow features; and the Focus Attention Module (FAM), which balances local detail and global context through local and pooling attention mechanisms. We evaluate our model on PolypDB, a newly introduced multi-modality, multi-center dataset for building more reliable segmentation methods. Extensive experiments show that FocusNet consistently outperforms existing state-of-the-art approaches, achieving Dice coefficients of 82.47% on BLI, 88.46% on FICE, 92.04% on LCI, 82.09% on NBI, and 93.42% on WLI, demonstrating its accuracy and robustness across five modalities. The source code for FocusNet is available at https://github.com/JunZengz/FocusNet.
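The core idea behind the FAM, combining fine-grained local attention with pooling-based attention for global context, can be illustrated with a minimal NumPy sketch on a 1-D token sequence. The window size, pooling ratio, and all function names below are illustrative assumptions, not the paper's actual implementation (which operates on 2-D feature maps inside a Transformer backbone):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(x, window=4):
    # Attend only within non-overlapping windows: cheap, preserves local detail.
    n, d = x.shape
    out = np.zeros_like(x)
    for s in range(0, n, window):
        w = x[s:s + window]                       # tokens in one window
        attn = softmax(w @ w.T / np.sqrt(d))      # window-local attention weights
        out[s:s + window] = attn @ w
    return out

def pooling_attention(x, pool=4):
    # Average-pool keys/values into a short summary sequence, then let every
    # token attend to the pooled tokens: coarse but global context.
    n, d = x.shape
    m = n // pool
    pooled = x[:m * pool].reshape(m, pool, d).mean(axis=1)  # (m, d) summaries
    attn = softmax(x @ pooled.T / np.sqrt(d))               # (n, m) weights
    return attn @ pooled

def focus_attention(x, window=4, pool=4):
    # Fuse both branches; a simple sum stands in for whatever fusion FAM uses.
    return local_attention(x, window) + pooling_attention(x, pool)

x = np.random.default_rng(0).normal(size=(16, 8))  # 16 tokens, 8 channels
y = focus_attention(x)
print(y.shape)  # (16, 8)
```

The local branch keeps attention cost linear in sequence length, while the pooled branch gives each token a compressed view of the whole sequence, which is the trade-off the abstract attributes to FAM.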