Large Language Model Evaluated Stand-alone Attention-Assisted Graph Neural Network with Spatial and Structural Information Interaction for Precise Endoscopic Image Segmentation

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Colorectal polyp segmentation remains challenging due to low contrast, specular highlights, and ill-defined boundaries. To address these issues, we propose FOCUS-Med, a novel framework that (1) introduces large language models (LLMs) for the first time to perform fine-grained, qualitative assessment of medical image segmentation quality; (2) designs a Dual Graph Convolutional Network (Dual-GCN) coupled with a location-fused stand-alone self-attention mechanism to jointly model spatial and topological structural information; and (3) incorporates a trainable weighted fast normalized fusion strategy to enable multi-scale feature aggregation and global contextual enhancement. Evaluated on multiple public benchmarks, FOCUS-Med achieves state-of-the-art performance across five key metrics, including Dice and IoU, while significantly improving polyp boundary localization accuracy. These results demonstrate its effectiveness and clinical potential in AI-assisted colonoscopy diagnosis.
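The "location-fused stand-alone self-attention" in point (2) can be illustrated with a generic scaled dot-product self-attention in which positional encodings are added to the inputs before projection. This is only a sketch of the general idea; the exact position-fusion scheme and attention formulation used in FOCUS-Med may differ, and all names below (`location_fused_attention`, `P`, etc.) are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def location_fused_attention(X, P, Wq, Wk, Wv):
    """Self-attention with positional encodings fused into the inputs.

    X: token/patch features (n, d); P: positional encodings (n, d);
    Wq, Wk, Wv: projection matrices (d, d). A hypothetical sketch of
    'location-fused stand-alone self-attention', not the paper's layer.
    """
    Z = X + P                                  # fuse location information
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # scaled dot-product scores
    return softmax(scores, axis=-1) @ V        # attention-weighted values

# Toy example: 4 patches, 3-dim features, identity projections
X = np.zeros((4, 3))
P = np.arange(12.0).reshape(4, 3) * 0.1
I = np.eye(3)
out = location_fused_attention(X, P, I, I, I)
```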

📝 Abstract
Accurate endoscopic segmentation of polyps is critical for early colorectal cancer detection. However, this task remains challenging due to low contrast with the surrounding mucosa, specular highlights, and indistinct boundaries. To address these challenges, we propose FOCUS-Med, which stands for Fusion of spatial and structural graph with attentional context-aware polyp segmentation in endoscopic medical imaging. FOCUS-Med integrates a Dual Graph Convolutional Network (Dual-GCN) module to capture contextual spatial and topological structural dependencies. This graph-based representation enables the model to better distinguish polyps from background tissue by leveraging topological cues and spatial connectivity, which are often obscured in raw image intensities, and it enhances the model's ability to preserve boundaries and delineate the complex shapes typical of polyps. In addition, a location-fused stand-alone self-attention mechanism is employed to strengthen global context integration. To bridge the semantic gap between encoder-decoder layers, we incorporate a trainable weighted fast normalized fusion strategy for efficient multi-scale aggregation. Notably, we are the first to introduce a Large Language Model (LLM) to provide detailed qualitative evaluations of segmentation quality. Extensive experiments on public benchmarks demonstrate that FOCUS-Med achieves state-of-the-art performance across five key metrics, underscoring its effectiveness and clinical potential for AI-assisted colonoscopy.
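The "trainable weighted fast normalized fusion" described in the abstract resembles the fast normalized fusion popularized by BiFPN-style feature pyramids: learnable weights are clamped to be non-negative and normalized by their sum before mixing same-resolution feature maps. The sketch below shows that general pattern under those assumptions; FOCUS-Med's exact formulation and weight parameterization may differ.

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shape feature maps with learnable non-negative weights.

    features: list of arrays with identical shape; weights: one learnable
    scalar per feature map. A sketch of fast normalized fusion, not the
    paper's exact strategy.
    """
    w = np.maximum(weights, 0.0)   # clamp so every weight is non-negative
    w = w / (w.sum() + eps)        # normalize so the weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

# Example: fuse two 2x2 feature maps with weights 1.0 and 3.0
a = np.ones((2, 2))
b = np.full((2, 2), 2.0)
fused = fast_normalized_fusion([a, b], np.array([1.0, 3.0]))
```

Compared with a softmax over the weights, the ReLU-plus-sum normalization avoids the exponential and is cheaper per fusion node, which is the usual motivation for the "fast" variant.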
Problem

Research questions and friction points this paper is trying to address.

Accurate polyp segmentation in low-contrast endoscopic images
Integrating spatial and structural dependencies for polyp delineation
Bridging semantic gaps in encoder-decoder layers for segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-GCN module captures spatial and structural dependencies
Location-fused self-attention enhances global context integration
LLM provides qualitative evaluations of segmentation quality
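The Dual-GCN idea from the bullets above can be sketched as two graph-convolution branches over the same node features: one using a spatial adjacency (e.g. pixel/patch neighborhoods) and one using a structural adjacency (e.g. feature similarity), fused at the output. This is a minimal illustration assuming standard symmetric normalization and a simple additive fusion; the paper's graph construction and fusion are likely more elaborate, and all names here are hypothetical.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, aggregate neighbors, project, ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2}
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

def dual_gcn(A_spatial, A_structural, H, W_sp, W_st):
    """Hypothetical dual branch: one GCN over spatial adjacency, one
    over structural (feature-similarity) adjacency, fused by summation."""
    return gcn_layer(A_spatial, H, W_sp) + gcn_layer(A_structural, H, W_st)

# Toy example: 4 nodes on a ring (spatial) plus similarity pairs (structural)
A_sp = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
A_st = np.array([[0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0],
                 [0, 1, 0, 0]], dtype=float)
H = np.ones((4, 3))
out = dual_gcn(A_sp, A_st, H, np.full((3, 2), 0.1), np.full((3, 2), 0.2))
```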