🤖 AI Summary
This study systematically investigates cross-dataset fairness and generalization bottlenecks of foundation models (FMs) in mammographic screening, focusing on spurious correlations and subgroup disparities induced by variability in image quality, annotation uncertainty, and sensitive attributes (e.g., breast density, age). We find that modality-specific pretraining, while improving overall performance, widens subgroup performance gaps, and that naive data aggregation fails to close those gaps for extreme-density and elderly subgroups. To address these challenges, we propose an integrated framework combining multi-center data harmonization, domain-shift analysis, adversarial debiasing, and fairness-aware reweighting. Evaluated across six heterogeneous datasets, our approach achieves an average AUC gain of 8.2%, reduces performance variance for high-density and elderly subgroups by 37%, and improves the Equalized Odds Difference (EODD) fairness metric by 51%, all without sacrificing baseline predictive performance.
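The EODD metric reported above is commonly defined, for binary predictions and a categorical sensitive attribute, as the larger of the true-positive-rate and false-positive-rate gaps between subgroups. Below is a minimal sketch under that standard definition; the function name and arguments are illustrative, not the paper's code:

```python
import numpy as np

def equalized_odds_difference(y_true, y_pred, group):
    """EODD: max gap in TPR and FPR across subgroups (0 = perfectly fair)."""
    gaps = []
    for label in (1, 0):  # label=1 yields the TPR gap, label=0 the FPR gap
        rates = [
            y_pred[(group == g) & (y_true == label)].mean()  # P(pred=1 | y=label, group=g)
            for g in np.unique(group)
        ]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```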
📝 Abstract
Over the past decades, computer-aided diagnosis tools for breast cancer have been developed to enhance screening procedures, yet their clinical adoption remains challenged by data variability and inherent biases. Although foundation models (FMs) have recently demonstrated impressive generalizability and transfer-learning capabilities by leveraging vast and diverse datasets, their performance can be undermined by spurious correlations arising from variations in image quality, labeling uncertainty, and sensitive patient attributes. In this work, we explore the fairness and bias of FMs for mammography classification by leveraging a large pool of datasets from diverse sources, including data from underrepresented regions and an in-house dataset. Our extensive experiments show that while modality-specific pretraining of FMs enhances performance, classifiers trained on features from individual datasets fail to generalize across domains. Aggregating datasets improves overall performance, yet does not fully mitigate biases, leading to significant disparities across underrepresented subgroups such as extreme breast densities and age groups. Furthermore, while domain-adaptation strategies can reduce these disparities, they often incur a performance trade-off. In contrast, fairness-aware techniques yield more stable and equitable performance across subgroups. These findings underscore the necessity of incorporating rigorous fairness evaluations and mitigation strategies into FM-based models to foster inclusive and generalizable AI.
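One concrete instance of the fairness-aware techniques the abstract refers to is subgroup reweighting, which upweights samples from rare subgroups (e.g., extreme breast densities) so that each subgroup contributes equally to the training loss. The sketch below assumes that common inverse-frequency scheme; the paper's exact weighting may differ, and all names here are illustrative:

```python
import numpy as np
import torch
import torch.nn.functional as F

def subgroup_weights(groups):
    """Per-sample inverse-frequency weights so every subgroup
    (e.g., a breast-density bin) carries equal total weight."""
    values, counts = np.unique(groups, return_counts=True)
    inv_freq = {v: len(groups) / (len(values) * c) for v, c in zip(values, counts)}
    return torch.tensor([inv_freq[g] for g in groups], dtype=torch.float32)

# Inside a training step (logits: [N, C] tensor, labels: [N] tensor, groups: [N] array):
# per_sample = F.cross_entropy(logits, labels, reduction="none")
# loss = (subgroup_weights(groups) * per_sample).mean()
```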