Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies pervasive shortcut learning and demographic bias in MRI-based deep learning models for Alzheimer’s disease diagnosis: models exploit protected attributes—such as race and sex—as spurious features, degrading generalization to underrepresented groups. Through systematic experiments with ResNet and Swin Transformer on multi-center 3D MRI datasets, augmented by quantitative attribution and visualization analyses, we provide the first empirical evidence that latent demographic distribution shifts in neuroimaging data are captured by models and induce bias. We propose the first fairness analysis framework tailored to neuroimaging AI, which explicitly localizes brain regions vulnerable to demographic bias and demonstrates that imbalanced demographic composition in training data significantly compromises model fairness. To foster reproducible, equitable AI research, we release all code and analysis tools publicly.

📝 Abstract
Magnetic resonance imaging (MRI) is the gold standard for brain imaging. Deep learning (DL) algorithms have been proposed to aid in the diagnosis of diseases such as Alzheimer's disease (AD) from MRI scans. However, DL algorithms can suffer from shortcut learning, in which spurious features, not directly related to the output label, are used for prediction. When these features are related to protected attributes, they can lead to performance bias against underrepresented protected groups, such as those defined by race and sex. In this work, we explore the potential for shortcut learning and demographic bias in DL-based AD diagnosis from MRI. We first investigate whether DL algorithms can identify race or sex from 3D brain MRI scans, to establish the presence or otherwise of race- and sex-based distributional shifts. Next, we investigate whether training set imbalance by race or sex can cause a drop in model performance, indicating shortcut learning and bias. Finally, we conduct a quantitative and qualitative analysis of feature attributions in different brain regions for both the protected attribute and AD classification tasks. Through these experiments, and using multiple datasets and DL models (ResNet and Swin Transformer), we demonstrate the existence of both race- and sex-based shortcut learning and bias in DL-based AD classification. Our work lays the foundation for fairer DL diagnostic tools in brain MRI. The code is provided at https://github.com/acharaakshit/ShortMR
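The abstract describes probing for shortcut learning by deliberately imbalancing the training set with respect to a protected attribute and then comparing group-wise performance. A minimal sketch of that experimental setup is below; the function and parameter names are illustrative assumptions, not the paper's actual code (see the linked repository for that).

```python
# Hedged sketch of the training-set imbalance experiment described in the
# abstract: subsample so one protected group dominates, then measure the
# per-group performance gap. Names and ratios are illustrative.
import random

def imbalanced_subset(samples, attribute_of, majority_value, ratio, size, seed=0):
    """Draw a subset of `size` samples in which a fraction `ratio` carries the
    majority attribute value (e.g. 90% one sex), mimicking the demographic
    imbalance used to probe shortcut learning."""
    rng = random.Random(seed)
    majority = [s for s in samples if attribute_of(s) == majority_value]
    minority = [s for s in samples if attribute_of(s) != majority_value]
    n_maj = int(round(size * ratio))
    subset = rng.sample(majority, n_maj) + rng.sample(minority, size - n_maj)
    rng.shuffle(subset)
    return subset

def groupwise_accuracy(preds, labels, groups):
    """Accuracy per protected group; the gap between groups is the bias signal."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = sum(preds[i] == labels[i] for i in idx) / len(idx)
    return out
```

Training the AD classifier on such subsets and comparing `groupwise_accuracy` on a balanced test set is one way to expose the performance drop for underrepresented groups that the paper reports.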
Problem

Research questions and friction points this paper is trying to address.

Detecting demographic bias in Alzheimer's MRI classification models
Investigating shortcut learning from race and sex attributes
Analyzing feature attribution disparities across protected groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects demographic shortcuts in MRI scans
Analyzes feature attributions across brain regions
Uses multiple datasets and deep learning models
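The second innovation bullet refers to aggregating voxel-level feature attributions over anatomically defined brain regions. A minimal sketch of such region-wise aggregation, assuming an attribution map and an integer-labelled atlas on the same voxel grid (the names here are illustrative, not the paper's code):

```python
# Hedged sketch: aggregating a voxel-wise attribution map (e.g. from a
# saliency or integrated-gradients method) over atlas-defined brain regions.
# `attr_map` and `atlas` are assumed to share the same voxel grid.
import numpy as np

def regionwise_attribution(attr_map, atlas, region_labels):
    """Mean absolute attribution per atlas region, giving a per-region score
    that can be compared between protected-attribute and AD classification
    tasks to localize regions vulnerable to demographic bias."""
    return {r: float(np.abs(attr_map[atlas == r]).mean()) for r in region_labels}
```

Comparing these per-region scores between the demographic-prediction and disease-classification tasks is one way to localize where the two tasks rely on overlapping image features.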
Akshit Achara
School of Biomedical Engineering and Imaging Sciences, King’s College London, UK
Esther Puyol-Antón
School of Biomedical Engineering and Imaging Sciences, King’s College London, UK
Alexander Hammers
School of Biomedical Engineering and Imaging Sciences, King's College London, UK
Andrew P. King
School of Biomedical Engineering and Imaging Sciences, King’s College London, UK