🤖 AI Summary
Colorectal polyp segmentation remains challenging due to low contrast, specular highlights, and ill-defined boundaries. To address these issues, we propose FOCUS-Med, a novel framework that (1) introduces large language models (LLMs) for the first time to perform fine-grained, qualitative assessment of medical image segmentation quality; (2) designs a Dual Graph Convolutional Network (Dual-GCN) coupled with a location-fused stand-alone self-attention mechanism to jointly model spatial and topological structural information; and (3) incorporates a trainable weighted fast normalized fusion strategy to enable multi-scale feature aggregation and global contextual enhancement. Evaluated on multiple public benchmarks, FOCUS-Med achieves state-of-the-art performance across five key metrics, including Dice and IoU, while significantly improving polyp boundary localization accuracy. These results demonstrate its effectiveness and clinical potential in AI-assisted colonoscopy diagnosis.
📝 Abstract
Accurate segmentation of polyps in endoscopic images is critical for early colorectal cancer detection. However, the task remains challenging due to low contrast with the surrounding mucosa, specular highlights, and indistinct boundaries. To address these challenges, we propose FOCUS-Med, which stands for Fusion of spatial and structural graph with attentional context-aware polyp segmentation in endoscopic medical imaging. FOCUS-Med integrates a Dual Graph Convolutional Network (Dual-GCN) module to capture contextual spatial and topological structural dependencies. This graph-based representation enables the model to better distinguish polyps from background tissue by leveraging topological cues and spatial connectivity, which are often obscured in raw image intensities, and enhances its ability to preserve boundaries and delineate the complex shapes typical of polyps. In addition, a location-fused stand-alone self-attention module is employed to strengthen global context integration. To bridge the semantic gap between encoder and decoder layers, we incorporate a trainable weighted fast normalized fusion strategy for efficient multi-scale aggregation. Notably, we are the first to use a Large Language Model (LLM) to provide detailed qualitative evaluations of segmentation quality. Extensive experiments on public benchmarks demonstrate that FOCUS-Med achieves state-of-the-art performance across five key metrics, underscoring its effectiveness and clinical potential for AI-assisted colonoscopy.
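The trainable weighted fast normalized fusion mentioned in the abstract can be illustrated with a minimal NumPy sketch. This assumes the widely used formulation in which learnable per-branch weights are kept non-negative and normalized by their sum plus a small epsilon; it is an illustration of the general technique, not the authors' exact implementation, and the function and parameter names are hypothetical.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with learned scalar weights.

    features : list of arrays with identical shape (e.g. decoder features
               already resampled to a common resolution).
    weights  : one learnable scalar per feature map.
    eps      : small constant that keeps the denominator away from zero.
    """
    # Clamp weights to be non-negative so the fused output is a
    # convex-like combination and training stays stable.
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
    # Normalize by the weight sum plus eps (cheaper than a softmax).
    w = w / (w.sum() + eps)
    # Weighted sum of the input feature maps.
    return sum(wi * f for wi, f in zip(w, features))

# Example: fuse two 2x2 feature maps with equal weights.
fused = fast_normalized_fusion(
    [np.ones((2, 2)), np.zeros((2, 2))], weights=[1.0, 1.0]
)
```

In a full model the scalar weights would be trainable parameters updated by backpropagation alongside the rest of the network; the normalization makes each weight interpretable as the relative importance of its scale.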