🤖 AI Summary
This paper tackles unsupervised detection and precise localization of rare, subtle abnormalities in medical imaging with a Patch-GAN framework. The method combines local masked reconstruction with fine-grained patch-level discrimination, using a context-aware, target-oriented patch ranking mechanism to sharpen sensitivity to local anomalies while preserving global semantic consistency. Key contributions: (i) the first unsupervised Patch-GAN architecture to drop whole-image pixel-level reconstruction constraints in favor of modeling patch-level novelty distributions; and (ii) a learnable patch importance ranking module that enables pixel-level anomaly localization. Evaluated on ISIC 2016 and BraTS 2019, the method achieves AUCs of 95.79% and 96.05%, respectively, substantially outperforming three state-of-the-art unsupervised approaches.
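To make the patch-level discrimination and learnable ranking concrete, here is a minimal PyTorch sketch. Everything in it (the 64×64 input, the 16×16 patch grid, the layer sizes, the softmax weighting, and the names `PatchDiscriminator` and `PatchRanker`) is a hypothetical illustration of the idea, not the paper's published architecture.

```python
# Hypothetical sketch of patch-level discrimination with a learnable
# ranking head; sizes and weighting scheme are assumptions, not the
# paper's exact design.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Scores each non-overlapping 16x16 patch of an image as real/fake."""
    def __init__(self, in_ch=3):
        super().__init__()
        # Strided convolutions shrink a 64x64 input so each output cell's
        # receptive field covers roughly one 16x16 patch (PatchGAN-style).
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),  # 64 -> 32
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),    # 32 -> 16
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=4),                # 16 -> 4x4 logits
        )

    def forward(self, x):
        return self.net(x).flatten(1)  # (B, 16): one logit per patch

class PatchRanker(nn.Module):
    """Learnable importance weights so high-anomaly patches dominate."""
    def __init__(self, num_patches=16):
        super().__init__()
        self.score = nn.Linear(num_patches, num_patches)

    def forward(self, patch_logits):
        # Softmax over patches yields importance weights; the image-level
        # anomaly score is the importance-weighted sum of patch logits.
        weights = torch.softmax(self.score(patch_logits), dim=1)
        return (weights * patch_logits).sum(dim=1)  # (B,)
```

The point of the ranking head is that a few strongly abnormal patches should dominate the image-level decision rather than being averaged away by the many normal patches around them.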
📝 Abstract
Detecting novel anomalies in medical imaging is challenging because labeled data for rare abnormalities is scarce, and such abnormalities are often highly variable and subtle. The challenge is compounded when small abnormal regions are embedded within larger normal areas, since whole-image predictions frequently overlook these subtle deviations. To address these issues, we propose an unsupervised Patch-GAN framework that detects and localizes anomalies by capturing both local detail and global structure. Our framework first reconstructs masked images to learn fine-grained, normal-specific features, enhancing sensitivity to minor deviations from normality. It then divides the reconstructed images into patches and assesses the authenticity of each, identifying anomalies at a finer granularity than whole-image evaluation allows. Additionally, a patch-ranking mechanism prioritizes regions with higher anomaly scores, reinforcing the alignment between local patch discrepancies and the global image context. Experimental results on the ISIC 2016 skin lesion and BraTS 2019 brain tumor datasets validate our framework's effectiveness, achieving AUCs of 95.79% and 96.05%, respectively, and outperforming three state-of-the-art baselines.
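Reading the abstract as a pipeline (mask, reconstruct with a generator trained only on normal images, score patches, rank), a hedged end-to-end sketch might look like the following. The masking ratio, patch size, and the `generator`/`discriminator`/`ranker` interfaces are assumptions carried over from the sketch above, not the paper's specification.

```python
# Assumed inference pipeline: mask random patches, inpaint from
# normal-only statistics, then score and rank the patches.
import torch

def mask_patches(x, patch=16, ratio=0.3):
    """Zero out a random subset of non-overlapping patches."""
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > ratio).float()
    # Upsample the patch-grid mask back to pixel resolution.
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * mask, mask

@torch.no_grad()
def anomaly_score(x, generator, discriminator, ranker):
    masked, _ = mask_patches(x)
    recon = generator(masked)            # inpaints with normal-only appearance
    patch_logits = discriminator(recon)  # (B, num_patches) patch-level scores
    return ranker(patch_logits)          # (B,) ranked image-level score
```

The expectation behind this design is that abnormal regions reconstruct poorly from normal-only statistics, so their patches draw both high discriminator logits and high ranking weight, which is what lets the patch-level scores double as a coarse localization map.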