🤖 AI Summary
To address the challenge of scarce annotated data in early Alzheimer’s disease (AD) diagnosis, this paper proposes AnoBFN, an unsupervised anomaly detection method tailored for FDG PET brain imaging. AnoBFN deeply integrates diffusion modeling with Bayesian Flow Networks (BFNs) into a conditional generative framework: (i) it incorporates a recursive feedback mechanism to explicitly preserve subject-specific anatomical structure; and (ii) it extends BFNs to model high-spatially-correlated noise, thereby enhancing reconstruction fidelity and anomaly sensitivity. Evaluated on public FDG PET datasets, AnoBFN significantly outperforms conventional unsupervised baselines—including VAEs, GANs, and standard diffusion models—achieving a 12.7% improvement in anomaly localization accuracy and a 23.4% reduction in false positive rate. The method establishes a novel, interpretable, and robust paradigm for unsupervised anomaly detection in medical imaging.
📝 Abstract
Unsupervised anomaly detection (UAD) plays a crucial role in neuroimaging for identifying deviations from healthy subject data and thus facilitating the diagnosis of neurological disorders. In this work, we focus on Bayesian flow networks (BFNs), a novel class of generative models, which have not yet been applied to medical imaging or anomaly detection. BFNs combine the strength of diffusion frameworks and Bayesian inference. We introduce AnoBFN, an extension of BFNs for UAD, designed to: i) perform conditional image generation under high levels of spatially correlated noise, and ii) preserve subject specificity by incorporating a recursive feedback from the input image throughout the generative process. We evaluate AnoBFN on the challenging task of Alzheimer's disease-related anomaly detection in FDG PET images. Our approach outperforms other state-of-the-art methods based on VAEs (beta-VAE), GANs (f-AnoGAN), and diffusion models (AnoDDPM), demonstrating its effectiveness at detecting anomalies while reducing false positive rates.