Integrating Non-Linear Radon Transformation for Diabetic Retinopathy Grading

📅 2025-04-22
🤖 AI Summary
To address inaccurate early grading of diabetic retinopathy (DR) caused by irregular retinal morphology and subtle lesion characteristics in fundus images, this paper proposes RadFuse, a framework for fine-grained clinical DR grading. Its core innovation is a nonlinear RadEx transform, tailored to the irregular spatial distribution of retinal lesions, that maps raw fundus images into discriminative sinogram representations. RadFuse then performs joint learning across both the spatial domain and the Radon-transformed domain to support five-level clinical grading. Integrated into backbone networks such as ResNeXt-50, RadFuse achieves a quadratic weighted kappa of 93.24% for five-level grading and 99.09% accuracy for binary classification on the APTOS-2019 and DDR benchmarks, surpassing existing state-of-the-art methods.
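The headline metric here, quadratic weighted kappa, penalizes a misgrade by the squared distance between the predicted and true severity levels, so confusing grade 0 with grade 4 costs far more than an off-by-one error. A minimal sketch with scikit-learn; the two grade vectors are made-up illustrations, not data from the paper:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical five-level DR grades (0 = no DR ... 4 = proliferative DR)
y_true = [0, 1, 2, 3, 4, 2, 1, 0]
y_pred = [0, 1, 2, 2, 4, 2, 0, 0]  # two off-by-one misgrades

# weights="quadratic" gives the quadratic weighted kappa (QWK)
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(round(qwk, 4))  # → 0.9292
```

Because both errors are adjacent-grade mistakes, QWK stays high; the same two errors across distant grades would drag it down sharply.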

📝 Abstract
Diabetic retinopathy is a serious ocular complication that poses a significant threat to patients' vision and overall health. Early detection and accurate grading are essential to prevent vision loss. Current automatic grading methods rely heavily on deep learning applied to retinal fundus images, but the complex, irregular patterns of lesions in these images, which vary in shape and distribution, make it difficult to capture subtle changes. This study introduces RadFuse, a multi-representation deep learning framework that integrates non-linear RadEx-transformed sinogram images with traditional fundus images to enhance diabetic retinopathy detection and grading. Our RadEx transformation, an optimized non-linear extension of the Radon transform, generates sinogram representations to capture complex retinal lesion patterns. By leveraging both spatial and transformed domain information, RadFuse enriches the feature set available to deep learning models, improving the differentiation of severity levels. We conducted extensive experiments on two benchmark datasets, APTOS-2019 and DDR, using three convolutional neural networks (CNNs): ResNeXt-50, MobileNetV2, and VGG19. RadFuse showed significant improvements over fundus-image-only models across all three CNN architectures and outperformed state-of-the-art methods on both datasets. For severity grading across five stages, RadFuse achieved a quadratic weighted kappa of 93.24%, an accuracy of 87.07%, and an F1-score of 87.17%. In binary classification between healthy and diabetic retinopathy cases, the method reached an accuracy of 99.09%, precision of 98.58%, and recall of 99.6%, surpassing previously established models. These results demonstrate RadFuse's capacity to capture complex non-linear features, advancing diabetic retinopathy classification and promoting the integration of advanced mathematical transforms in medical image analysis.
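The exact form of the nonlinear RadEx transform is not given in this summary, but the classical linear Radon transform it extends can be approximated by rotating the image through a sweep of angles and integrating along one axis; stacking the projections yields the sinogram. A sketch with SciPy, using a toy disc image as a stand-in for a preprocessed fundus image:

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sinogram(image, angles):
    """Approximate the classical (linear) Radon transform: for each
    projection angle, rotate the image and sum along columns, i.e.
    take line integrals perpendicular to the detector axis."""
    return np.stack(
        [rotate(image, angle, reshape=False, order=1).sum(axis=0)
         for angle in angles],
        axis=1,
    )

# Toy stand-in for a fundus image: a bright disc on a dark background.
img = np.zeros((128, 128))
yy, xx = np.ogrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = 1.0

theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = radon_sinogram(img, theta)
print(sino.shape)  # (128, 60): projection positions x angles
```

Each column of the sinogram is one projection; RadEx, as described in the abstract, replaces the linear line integrals with an optimized non-linear variant better suited to irregular lesion geometry.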
Problem

Research questions and friction points this paper is trying to address.

Enhancing diabetic retinopathy grading using multi-representation deep learning
Capturing complex retinal lesion patterns with non-linear RadEx transformation
Improving severity level differentiation via spatial and transformed domain fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates non-linear RadEx-transformed sinogram images
Combines spatial and transformed domain information
Outperforms fundus-image-only models significantly
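The fusion idea in these bullets, two encoders whose pooled features are combined before a five-way grading head, can be sketched in PyTorch. The tiny convolutional branches below are placeholders for illustration only, not the ResNeXt-50, MobileNetV2, or VGG19 backbones the paper actually evaluates:

```python
import torch
import torch.nn as nn

class DualBranchDR(nn.Module):
    """Illustrative two-branch fusion: one branch encodes the fundus
    image, the other its sinogram; pooled features are concatenated
    and fed to a 5-way severity-grading head."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.fundus_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sino_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 + 16, num_classes)

    def forward(self, fundus_img, sinogram_img):
        # Late fusion: concatenate per-branch feature vectors.
        z = torch.cat(
            [self.fundus_branch(fundus_img), self.sino_branch(sinogram_img)],
            dim=1,
        )
        return self.head(z)

model = DualBranchDR()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```

Late fusion keeps the two domains' encoders independent, so each branch can specialize in spatial or sinogram statistics before the grading head sees both.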
Farida Mohsen
Postdoctoral Researcher, HBKU, College of Science and Engineering
Artificial Intelligence · Medical AI · Medical Imaging · NLP · Human-Robot Interaction
Samir Belhaouari
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
Zubair Shah
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar