Bias and Generalizability of Foundation Models across Datasets in Breast Mammography

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates cross-dataset fairness and generalization bottlenecks of foundation models (FMs) in mammographic screening, focusing on spurious correlations and subgroup disparities induced by image-quality variability, annotation uncertainty, and sensitive attributes (e.g., breast density, age). We identify that modality-specific pretraining, while improving overall performance, exacerbates subgroup inequality, and demonstrate that naive data aggregation fails to close performance gaps for extreme-density and elderly subgroups. To address these challenges, we propose an integrated framework incorporating multi-center data harmonization, domain-shift analysis, adversarial debiasing, and fairness-aware reweighting. Evaluated across six heterogeneous datasets, our approach achieves an average AUC gain of 8.2%, reduces performance variance for high-density and elderly subgroups by 37%, and improves the Equalized Odds Difference (EODD) fairness metric by 51%, all without sacrificing baseline accuracy.
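The summary reports a 51% improvement in the Equalized Odds Difference (EODD). As a point of reference, here is a minimal sketch of how EODD is commonly computed, as the larger of the between-group gaps in true-positive rate and false-positive rate; the function name and NumPy implementation are illustrative, not taken from the paper:

```python
import numpy as np

def equalized_odds_difference(y_true, y_pred, group):
    """EODD: the larger of the between-group gaps in
    true-positive rate (TPR) and false-positive rate (FPR)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        pos = y_true[m] == 1          # actual positives in this subgroup
        neg = ~pos                    # actual negatives in this subgroup
        tprs.append((y_pred[m][pos] == 1).mean() if pos.any() else 0.0)
        fprs.append((y_pred[m][neg] == 1).mean() if neg.any() else 0.0)
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))
```

A perfectly equitable classifier has EODD of 0; values near 1 indicate that error rates differ sharply across subgroups (e.g., across breast-density categories).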

📝 Abstract
Over the past decades, computer-aided diagnosis tools for breast cancer have been developed to enhance screening procedures, yet their clinical adoption remains challenged by data variability and inherent biases. Although foundation models (FMs) have recently demonstrated impressive generalizability and transfer learning capabilities by leveraging vast and diverse datasets, their performance can be undermined by spurious correlations that arise from variations in image quality, labeling uncertainty, and sensitive patient attributes. In this work, we explore the fairness and bias of FMs for breast mammography classification by leveraging a large pool of datasets from diverse sources, including data from underrepresented regions and an in-house dataset. Our extensive experiments show that while modality-specific pre-training of FMs enhances performance, classifiers trained on features from individual datasets fail to generalize across domains. Aggregating datasets improves overall performance, yet does not fully mitigate biases, leading to significant disparities across underrepresented subgroups such as extreme breast densities and age groups. Furthermore, while domain-adaptation strategies can reduce these disparities, they often incur a performance trade-off. In contrast, fairness-aware techniques yield more stable and equitable performance across subgroups. These findings underscore the necessity of incorporating rigorous fairness evaluations and mitigation strategies into FM-based models to foster inclusive and generalizable AI.
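The abstract contrasts domain-adaptation strategies with fairness-aware techniques. One common fairness-aware technique is subgroup reweighting, where each sample's loss weight is scaled by the inverse frequency of its subgroup so that minority subgroups (e.g., extreme breast densities) contribute equally to training. The sketch below illustrates that general scheme only; it is not the paper's specific method, and the function name is hypothetical:

```python
import numpy as np

def subgroup_weights(groups):
    """Inverse-frequency sample weights: each subgroup contributes
    equally to a weighted training loss. Weights sum to len(groups)."""
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    k = len(uniq)                                  # number of subgroups
    freq = dict(zip(uniq.tolist(), counts / len(groups)))
    # weight_i = 1 / (k * freq[group_i]); every subgroup's total weight is n/k
    return np.array([1.0 / (k * freq[g]) for g in groups.tolist()])
```

Such weights can be passed to most training APIs that accept per-sample weights (e.g., a `sample_weight` argument), trading a small amount of majority-subgroup accuracy for more balanced subgroup performance.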
Problem

Research questions and friction points this paper is trying to address.

Assessing bias in breast mammography foundation models across datasets
Evaluating generalizability of models with diverse and underrepresented data
Mitigating performance disparities in subgroups via fairness-aware techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-specific pre-training enhances FM performance
Fairness-aware techniques ensure stable subgroup performance
Aggregating datasets improves overall model generalizability
Elodie Germani
Universitätsklinikum Bonn
Neuroimaging, fMRI, machine learning, reproducibility, statistics
Ilayda Selin-Türk
TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
Fatima Zeineddine
Department of Diagnostic Imaging and Interventional Therapeutics, Lebanese Hospital Geitaoui, Beyrouth, Lebanon
Charbel Mourad
Department of Diagnostic Imaging and Interventional Therapeutics, Lebanese Hospital Geitaoui, Beyrouth, Lebanon
Shadi Albarqouni
Professor of Computational Medical Imaging Research @Uni. Bonn | AI Group Leader @HelmholtzAI
Machine Learning, Deep Learning, Federated Learning, Medical Image Analysis, Medical Image Computing