🤖 AI Summary
This study addresses the limited pixel-level segmentation accuracy of lesions in early diabetic retinopathy (DR) screening. The authors propose an enhanced DeepLabV3+ architecture that, for the first time, integrates attention mechanisms into DR lesion segmentation by jointly leveraging semantic information and spatial attention to achieve precise delineation of microaneurysms, soft and hard exudates, and hemorrhages. Evaluated on the DDR dataset, the model improves mean average precision (mAP) from 0.3010 to 0.3326 and mean Intersection over Union (IoU) from 0.1791 to 0.1928. Notably, the detection performance for microaneurysms rises significantly to 0.0763, representing a clinically meaningful enhancement that advances the practicality of automated DR screening systems.
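The reported gains are in per-class Intersection over Union averaged over the four lesion types. As a minimal sketch of how class-wise IoU and its mean are typically computed for segmentation masks (function names are illustrative, not from the paper):

```python
import numpy as np

def class_iou(pred, target, cls):
    """IoU for one lesion class: |pred ∩ target| / |pred ∪ target|."""
    p = pred == cls
    t = target == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    # NaN when the class is absent from both masks, so it can be skipped
    return inter / union if union else float("nan")

def mean_iou(pred, target, classes):
    """Mean IoU over the given classes, ignoring absent ones."""
    return np.nanmean([class_iou(pred, target, c) for c in classes])
```

Per-class IoU is what makes the microaneurysm number visible separately: a high mean can hide a near-zero score on the smallest lesion class.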
📝 Abstract
Diabetic Retinopathy (DR) is an eye disease that arises from diabetes mellitus and can cause vision loss and blindness. To prevent irreversible vision loss, early detection through systematic screening is crucial. Although researchers have developed numerous automated deep-learning-based algorithms for DR screening, their clinical applicability remains limited, particularly in lesion segmentation. Our method provides pixel-level annotations of lesions, giving ophthalmologists practical support when screening fundus images for DR. In this work, we segmented four types of DR-related lesions: microaneurysms, soft exudates, hard exudates, and hemorrhages, on 757 images from the DDR dataset. To enhance lesion segmentation, an attention mechanism was integrated into DeepLabV3+. Compared with the baseline model, the Attention-DeepLab model raises mean average precision (mAP) from 0.3010 to 0.3326 and mean Intersection over Union (IoU) from 0.1791 to 0.1928. It also improves microaneurysm detection from 0.0205 to 0.0763, a clinically significant gain, since microaneurysms are the earliest visible sign of DR.
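The abstract does not detail the attention module's internals. As an assumption-laden NumPy sketch, a CBAM-style spatial attention gate of the kind commonly fused with DeepLabV3+ decoder features looks like this (the fixed weights stand in for the learned convolution of a real module):

```python
import numpy as np

def spatial_attention(features, w_avg=0.5, w_max=0.5):
    """Spatial attention over a (C, H, W) feature map (illustrative only).

    Pools across channels, combines the pooled maps (here a fixed 1x1
    weighting in place of a learned conv), squashes through a sigmoid,
    and reweights every channel by the resulting (H, W) attention map.
    """
    avg_pool = features.mean(axis=0)          # (H, W) channel-average
    max_pool = features.max(axis=0)           # (H, W) channel-max
    logits = w_avg * avg_pool + w_max * max_pool
    attn = 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> values in (0, 1)
    return features * attn[None, :, :], attn
```

The intuition for small lesions such as microaneurysms: the attention map suppresses background pixels and amplifies the few spatial locations where lesion evidence accumulates across channels, which is consistent with the large jump in the microaneurysm score reported above.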