Fairness Evaluation of Risk Estimation Models for Lung Cancer Screening

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the fairness of three AI-based lung cancer risk models (Sybil, Venkadesh21, and PanCan2b) across sex and racial subgroups of the National Lung Screening Trial (NLST) dataset. Method: multi-subgroup AUROC analysis, sensitivity/specificity evaluation at fixed thresholds, confidence interval estimation, and adjustment for clinical covariates to control for confounding. Guided by the JustEFAB ethical framework, performance gaps not attributable to clinically relevant factors are classified as ethically unjust bias. Contribution/Results: Sybil shows significantly higher AUROC in females than males (0.88 vs. 0.81; *p* < 0.001); Venkadesh21 shows markedly lower sensitivity in Black participants (0.39) than in White participants (0.69), a disparity that persists after adjustment for clinical covariates. PanCan2b demonstrates comparatively balanced performance. This work establishes a reproducible methodological framework and empirical benchmark for fairness validation of AI-driven lung cancer screening models.

📝 Abstract
Lung cancer is the leading cause of cancer-related mortality in adults worldwide. Screening high-risk individuals with annual low-dose CT (LDCT) can support earlier detection and reduce deaths, but widespread implementation may strain the already limited radiology workforce. AI models have shown potential in estimating lung cancer risk from LDCT scans. However, high-risk populations for lung cancer are diverse, and these models' performance across demographic groups remains an open question. In this study, we drew on the considerations of confounding factors and ethically significant biases outlined in the JustEFAB framework to evaluate potential performance disparities and fairness in two deep learning risk estimation models for lung cancer screening: the Sybil lung cancer risk model and the Venkadesh21 nodule risk estimator. We also examined disparities in the PanCan2b logistic regression model recommended in the British Thoracic Society nodule management guideline. Both deep learning models were trained on data from the US-based National Lung Screening Trial (NLST) and assessed on a held-out NLST validation set. We evaluated AUROC, sensitivity, and specificity across demographic subgroups, and explored potential confounding from clinical risk factors. We observed a statistically significant AUROC difference in Sybil's performance between women (0.88, 95% CI: 0.86, 0.90) and men (0.81, 95% CI: 0.78, 0.84; p < .001). At 90% specificity, Venkadesh21 showed lower sensitivity for Black participants (0.39, 95% CI: 0.23, 0.59) than for White participants (0.69, 95% CI: 0.65, 0.73). These differences were not explained by available clinical confounders and may therefore be classified as unfair biases under JustEFAB. Our findings highlight the importance of improving and monitoring model performance across underrepresented subgroups, and of further research on algorithmic fairness, in lung cancer screening.
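The subgroup evaluation described in the abstract (per-group AUROC, sensitivity at 90% specificity, and percentile-bootstrap confidence intervals) can be sketched as follows. This is an illustrative implementation, not the authors' code; the function names, the bootstrap settings, and the use of a percentile bootstrap are assumptions.

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: probability that a random positive case
    outscores a random negative case (ties count half)."""
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def sensitivity_at_specificity(scores, labels, specificity=0.90):
    """Sensitivity at the score threshold that attains the target
    specificity on the negative cases."""
    threshold = np.quantile(scores[~labels], specificity)
    return (scores[labels] > threshold).mean()

def bootstrap_ci(metric, scores, labels, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a metric,
    resampling cases with replacement; single-class resamples
    (where the metric is undefined) are skipped."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores), len(scores))
        y = labels[idx]
        if y.any() and (~y).any():
            vals.append(metric(scores[idx], y))
    return np.quantile(vals, [alpha / 2, 1 - alpha / 2])

# Per-subgroup use, e.g. with a hypothetical boolean 'female' mask:
# for name, mask in {"women": female, "men": ~female}.items():
#     print(name, auroc(scores[mask], labels[mask]),
#           bootstrap_ci(auroc, scores[mask], labels[mask]))
```

Running each metric within a demographic mask, rather than on the pooled cohort, is what surfaces the kind of gap the abstract reports (e.g. AUROC 0.88 vs. 0.81 between women and men for Sybil).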
Problem

Research questions and friction points this paper is trying to address.

Evaluates fairness of AI lung cancer risk models across demographics
Assesses performance disparities in deep learning models for screening
Investigates potential unfair biases in lung cancer risk estimation algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated fairness of deep learning lung cancer risk models
Used JustEFAB framework to assess performance disparities
Analyzed demographic subgroup differences in AUROC and sensitivity
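The subgroup AUROC comparisons above come with significance levels (e.g. p < .001 for Sybil's female-male gap). The paper does not state which test it used; one common nonparametric option is a permutation test on group membership, sketched here under that assumption (an alternative would be DeLong's test for correlated ROC curves).

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC (Mann-Whitney form); labels is a boolean array."""
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def permutation_test_auroc_gap(scores, labels, group, n_perm=5000, seed=0):
    """Two-sided permutation test of H0: both subgroups have equal AUROC.
    Group membership is shuffled while class labels stay fixed; the
    observed absolute gap is compared to the permutation distribution,
    with a +1 continuity correction on the p-value."""
    rng = np.random.default_rng(seed)
    labels, group = labels.astype(bool), group.astype(bool)

    def gap(g):
        return abs(auroc(scores[g], labels[g]) - auroc(scores[~g], labels[~g]))

    observed = gap(group)
    exceed = 0
    for _ in range(n_perm):
        g = rng.permutation(group)
        # skip permutations that leave a subgroup without both classes
        if (labels[g].any() and (~labels)[g].any()
                and labels[~g].any() and (~labels)[~g].any()):
            exceed += gap(g) >= observed
    return (exceed + 1) / (n_perm + 1)
```

Shuffling group membership preserves the overall score and label distributions, so a small p-value indicates the gap is unlikely under the hypothesis that subgroup identity is irrelevant to model performance.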
Shaurya Gaur
Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
Michel Vitale
Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
Alessa Hering
Radboud University Medical Center
Deep Learning, Image Registration, Tumor Follow-Up, LLM
Johan Kwisthout
Full Professor, Radboud University Nijmegen, Donders Center for Cognition
Bayesian networks, Approximate Inference, Complexity in PGMs
Colin Jacobs
Associate Professor in AI for Thoracic Oncology, Radboudumc, Nijmegen, The Netherlands
Medical Image Analysis, Machine Learning, Computer-aided Diagnosis, Deep Learning, Medical Imaging
Lena Philipp
Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
Fennie van der Graaf
Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands