LFA-Net: A Lightweight Network with LiteFusion Attention for Retinal Vessel Segmentation

📅 2025-09-25
🤖 AI Summary
To address the challenges of detecting fine retinal vessels, high computational overhead, and poor deployability in resource-constrained clinical settings, this paper proposes LFA-Net, a lightweight deep network for retinal vessel segmentation. Its core innovation is the LiteFusion-Attention module, which integrates residual connections, Vision Mamba-inspired dynamic state modeling, and modulated attention to efficiently capture both local structural details and global contextual dependencies. Designed with a minimal parameter count, LFA-Net achieves Dice scores of 83.28%, 87.44%, and 84.50% on DRIVE, STARE, and CHASE_DB1, respectively. The model contains only 0.11 million parameters, occupies just 0.42 MB of storage, and incurs only 4.46 GFLOPs of computational cost, demonstrating an exceptional balance between segmentation accuracy and inference efficiency for real-world clinical deployment.

📝 Abstract
Lightweight retinal vessel segmentation is important for the early diagnosis of vision-threatening and systemic diseases, especially in real-world clinical environments with limited computational resources. Although segmentation methods based on deep learning are improving, existing models still face challenges with small-vessel segmentation and high computational costs. To address these challenges, we propose a new vascular segmentation network, LFA-Net, which incorporates a newly designed attention module, LiteFusion-Attention. This module combines residual learning connections, Vision Mamba-inspired dynamics, and modulation-based attention, enabling the model to capture local and global context efficiently and in a lightweight manner. LFA-Net offers high performance with 0.11 million parameters, a 0.42 MB memory footprint, and 4.46 GFLOPs, making it ideal for resource-constrained environments. We validated our proposed model on DRIVE, STARE, and CHASE_DB1 with outstanding performance: Dice scores of 83.28%, 87.44%, and 84.50% and Jaccard indices of 72.85%, 79.31%, and 74.70%, respectively. The code of LFA-Net is available online at https://github.com/Mehwish4593/LFA-Net.
Problem

Research questions and friction points this paper is trying to address.

Addresses retinal vessel segmentation with limited computational resources
Improves small vessel segmentation accuracy in medical imaging
Reduces computational costs through lightweight attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

LiteFusion attention module for efficient context capture
Vision Mamba-inspired dynamics for lightweight global modeling
Residual learning connections with modulation-based attention
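To make the three ingredients above concrete, here is a minimal, hypothetical sketch of how modulated attention with a residual connection can be combined on a per-channel feature vector. This is not the paper's actual LiteFusion-Attention implementation; the function name, the per-channel scoring weights `score_w`, and the sigmoid gate weights `gate_w` are all illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def litefusion_attention_sketch(x, score_w, gate_w):
    """Hypothetical sketch (not the paper's code):
    1) per-channel scores -> softmax gives global context weights,
    2) a sigmoid gate modulates the attention strength,
    3) a residual connection adds the result back onto the input."""
    scores = [xi * wi for xi, wi in zip(x, score_w)]   # per-channel scores
    attn = softmax(scores)                             # attention weights
    gate = sigmoid(sum(xi * wi for xi, wi in zip(x, gate_w)))  # modulation gate
    # modulated attention + residual connection
    return [xi + gate * ai * xi for xi, ai in zip(x, attn)]

# Toy 4-channel feature vector with illustrative weights
x = [0.5, -1.0, 2.0, 0.0]
y = litefusion_attention_sketch(x, [1.0] * 4, [0.2] * 4)
```

Channels with larger scores receive proportionally more of the gated attention boost, while the residual path guarantees the original signal is never lost; zero-valued channels pass through unchanged.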