🤖 AI Summary
This work addresses unsupervised anomaly detection in brain MRI, where no abnormal samples or pixel-level annotations are available. We propose Patch2Loc, a self-supervised framework that models spatial positional relationships among normal image patches to localize anomalies via prediction error and its variance—yielding pixel-level anomaly heatmaps for precise lesion localization and segmentation (e.g., tumors, white matter hyperintensities). To our knowledge, this is the first method to formulate patch localization as a core task in unsupervised medical image anomaly detection. By fusing error-based heatmaps with pixel-wise reconstruction outputs, Patch2Loc significantly improves segmentation accuracy. Extensive evaluation on four public benchmarks—BraTS2021, MSLUB, ATLAS, and WMH—demonstrates consistent superiority over existing unsupervised approaches, establishing new state-of-the-art performance across all datasets.
📝 Abstract
Detecting brain lesions as abnormalities observed in magnetic resonance imaging (MRI) is essential for diagnosis and treatment. In the search for abnormalities such as tumors and malformations, radiologists may benefit from computer-aided diagnostics that use machine-learned computer vision systems to segment normal from abnormal brain tissue. While supervised learning methods require annotated lesions, we propose a new unsupervised approach (Patch2Loc) that learns from normal patches taken from structural MRI. We train a neural network model to map a patch back to its spatial location within a slice of the brain volume. During inference, abnormal patches are detected by the relatively higher error and/or variance of the location prediction. This generates a heatmap that can be integrated into pixel-wise methods to achieve finer-grained segmentation. We demonstrate the ability of our model to segment abnormal brain tissue by applying our approach to the detection of abnormal tissue such as tumors on T2-weighted images from the BraTS2021 and MSLUB datasets and on T1-weighted images from the ATLAS and WMH datasets. We show that it outperforms the state of the art in unsupervised segmentation. The codebase for this work can be found on our [GitHub page](https://github.com/bakerhassan/Patch2Loc).
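The inference step described in the abstract — scoring each patch by the error of its predicted spatial location — can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the patch grid, normalized coordinates, and squared-error scoring are assumptions, and the trained localization network is replaced here by precomputed predictions.

```python
import numpy as np

def anomaly_heatmap(pred_locs, true_locs, grid_shape):
    """Build a per-patch anomaly heatmap from location-prediction error.

    pred_locs, true_locs: (N, 2) arrays of normalized (row, col) coordinates
    grid_shape: (rows, cols) layout of the N patches within the slice
    """
    # Anomaly score = squared Euclidean distance between predicted and
    # actual patch location. A model trained only on normal tissue is
    # expected to mislocalize abnormal patches, giving them high scores.
    err = np.sum((pred_locs - true_locs) ** 2, axis=1)
    return err.reshape(grid_shape)

# Toy example: a 2x2 grid of patches, one of them badly mislocalized.
true = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
pred = true.copy()
pred[3] = [0.2, 0.1]  # simulate an abnormal patch's poor prediction
heat = anomaly_heatmap(pred, true, (2, 2))
```

In the full method, this per-patch heatmap would be upsampled and fused with pixel-wise reconstruction outputs, as the summary notes; the prediction variance can be combined with the error in the same way.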