🤖 AI Summary
Low-quality fundus images suffer from multi-scale information loss and insufficient lesion region enhancement. Method: This paper proposes the first end-to-end lightweight enhancement framework, integrating wavelet-based multi-scale encoding, a structure-preserving hierarchical group attention decoder, and an unsupervised targeted lesion-aware aggregation mechanism—enabling concurrent anatomical fidelity preservation and pathological region enhancement without requiring lesion annotations. Contribution/Results: Extensive experiments demonstrate significant PSNR/SSIM improvements over state-of-the-art methods across multiple public datasets, with a 37% reduction in model parameters. Moreover, the framework exhibits strong generalizability: zero-shot transfer to other ophthalmic imaging tasks achieves competitive performance, confirming its robustness and broad applicability in clinical fundus image enhancement.
📝 Abstract
High-quality fundus images provide essential anatomical information for clinical screening and ophthalmic disease diagnosis. Yet, due to hardware limitations, operational variability, and patient compliance, fundus images often suffer from low resolution and signal-to-noise ratio. Recent years have witnessed promising progress in fundus image enhancement. However, existing works usually focus on restoring structural details or global characteristics of fundus images, lacking a unified image enhancement framework to recover comprehensive multi-scale information. Moreover, few methods pinpoint the target of image enhancement, e.g., lesions, which is crucial for medical image-based diagnosis. To address these challenges, we propose a multi-scale target-aware representation learning framework (MTRL-FIE) for efficient fundus image enhancement. Specifically, we propose a multi-scale feature encoder (MFE) that employs wavelet decomposition to embed both low-frequency structural information and high-frequency details. Next, we design a structure-preserving hierarchical decoder (SHD) to fuse multi-scale feature embeddings for real fundus image restoration. SHD integrates hierarchical fusion and group attention mechanisms to achieve adaptive feature fusion while retaining local structural smoothness. Meanwhile, a target-aware feature aggregation (TFA) module is used to enhance pathological regions and reduce artifacts. Experimental results on multiple fundus image datasets demonstrate the effectiveness and generalizability of MTRL-FIE for fundus image enhancement. Compared to state-of-the-art methods, MTRL-FIE achieves superior enhancement performance with a more lightweight architecture. Furthermore, our approach generalizes to other ophthalmic image processing tasks without supervised fine-tuning, highlighting its potential for clinical applications.