Novel Extraction of Discriminative Fine-Grained Feature to Improve Retinal Vessel Segmentation

📅 2025-05-06
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Retinal vessel segmentation is critical for the early diagnosis of ocular diseases, yet existing methods predominantly rely on pixel-wise supervision and neglect the extraction of fine-grained discriminative features within encoders. To address this, we propose AttUKAN, a U-shaped architecture that integrates attention gates with differentiable Kolmogorov-Arnold Network (KAN) blocks. We further introduce a label-guided pixel-wise contrastive loss that explicitly models discriminative relationships between foreground and background pixel pairs. Evaluated on DRIVE, STARE, CHASE_DB1, HRF, and a private dataset, AttUKAN achieves an F1 score of 82.50% and a mean Intersection-over-Union (mIoU) of 70.24% on DRIVE, and consistently outperforms 11 state-of-the-art methods across all five datasets.

📝 Abstract
Retinal vessel segmentation is a vital early detection method for several severe ocular diseases. Despite significant progress with the advancement of neural networks, challenges remain. Specifically, retinal vessel segmentation aims to predict the class label for every pixel within a fundus image, with a primary focus on intra-image discrimination, making it vital for models to extract more discriminative features. Nevertheless, existing methods primarily focus on minimizing the difference between the decoder output and the label, but fail to fully exploit feature-level fine-grained representations from the encoder. To address these issues, we propose a novel Attention U-shaped Kolmogorov-Arnold Network named AttUKAN, along with a novel Label-guided Pixel-wise Contrastive Loss, for retinal vessel segmentation. Specifically, we integrate Attention Gates into Kolmogorov-Arnold Networks to enhance model sensitivity by suppressing irrelevant feature activations, and to improve interpretability through the non-linear modeling of KAN blocks. Additionally, we design a novel Label-guided Pixel-wise Contrastive Loss that supervises AttUKAN to extract more discriminative features by distinguishing between foreground vessel-pixel pairs and background pairs. Experiments are conducted on four public datasets (DRIVE, STARE, CHASE_DB1 and HRF) and our private dataset. AttUKAN achieves F1 scores of 82.50%, 81.14%, 81.34%, 80.21% and 80.09%, along with mIoU scores of 70.24%, 68.64%, 68.59%, 67.21% and 66.94% on these datasets, the highest among 11 compared retinal vessel segmentation networks. Quantitative and qualitative results show that AttUKAN achieves state-of-the-art performance and outperforms existing retinal vessel segmentation methods. Our code will be available at https://github.com/stevezs315/AttUKAN.
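The label-guided pixel-wise contrastive loss described above can be illustrated with a minimal numpy sketch of a supervised, InfoNCE-style formulation: pixel embeddings sharing a label (vessel-vessel or background-background) are treated as positive pairs, cross-class pairs as negatives. Function names, the sampling of pixels, and the exact loss form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def label_guided_pixel_contrastive_loss(feats, labels, temperature=0.1):
    """Sketch of a label-guided pixel-wise contrastive loss.

    feats:  (N, D) array of pixel embeddings sampled from encoder features.
    labels: (N,) array of 0 (background) / 1 (vessel foreground) pixel labels.
    """
    # L2-normalise embeddings so dot products are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    np.fill_diagonal(sim, -np.inf)               # exclude self-pairs

    # positives: pixel pairs with the same class label
    pos_mask = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos_mask, False)

    # numerically stable log-softmax over each anchor's similarities
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # negative mean log-probability of positives, averaged over anchors
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    loss = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

Lower values indicate that same-class pixel embeddings cluster together while foreground and background embeddings are pushed apart, which is the discriminative behaviour the loss is meant to encourage.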
Problem

Research questions and friction points this paper is trying to address.

Enhancing discriminative feature extraction for retinal vessel segmentation
Addressing neglect of fine-grained encoder features in existing methods
Improving intra-image discrimination in fundus image pixel classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention U-shaped Kolmogorov-Arnold Network (AttUKAN)
Label-guided Pixel-wise Contrastive Loss
Enhanced feature extraction via Attention Gates
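The attention-gating idea in the list above can be sketched with a minimal numpy version of an additive attention gate (in the style of Attention U-Net): skip-connection features are re-weighted by coefficients computed from the coarser decoder signal. The weight matrices here are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate sketch.

    x: (N, Cx) skip-connection features from the encoder.
    g: (N, Cg) gating features from the coarser decoder level (upsampled).
    W_x, W_g, psi: projection weights to a shared intermediate dimension.
    """
    def relu(a):
        return np.maximum(a, 0.0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # project both inputs to a shared space and fuse additively
    q = relu(x @ W_x + g @ W_g)
    # per-position attention coefficients in (0, 1)
    alpha = sigmoid(q @ psi)
    # suppress irrelevant skip activations before concatenation
    return x * alpha
```

Because the coefficients lie in (0, 1), the gate can only attenuate skip features, which matches the stated goal of suppressing irrelevant feature activations.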
Shuang Zeng
Peking University, Georgia Institute of Technology
Self-supervised Contrastive Learning · Medical Image Segmentation · Superpixel · Large Language Model
Chee Hong Lee
Department of Biomedical Engineering, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; National Biomedical Imaging Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
Micky C. Nnamdi
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Wenqi Shi
Assistant Professor, University of Texas Southwestern Medical Center
AI for Healthcare · LLM Agent · Clinical Decision Support · Clinical Informatics
J. B. Tamo
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Lei Zhu
Department of Biomedical Engineering, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; National Biomedical Imaging Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
Hangzhou He
PhD student, Peking University
Explainability · Medical Image Analysis · Trustworthy AI
Xinliang Zhang
Department of Biomedical Engineering, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; National Biomedical Imaging Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
Qian Chen
Department of Biomedical Engineering, Peking University, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China; National Biomedical Imaging Center, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
May D. Wang
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
Yanye Lu
Peking University
Medical Imaging · Deep Learning · Machine Learning
Qiushi Ren
Peking University