🤖 AI Summary
This study systematically evaluates the population fairness of three AI-based lung cancer risk models—Sybil, Venkadesh21, and PanCan2b—on the National Lung Screening Trial (NLST) dataset, focusing on sex and racial subgroups.
Method: We employ multi-subgroup AUROC analysis, sensitivity/specificity threshold evaluation, confidence interval estimation, and rigorous confounding control—including adjustment for clinical covariates—to assess performance disparities. Guided by the JustEFAB ethical framework, we formally identify and define unexplained performance gaps—i.e., those not attributable to clinically relevant factors—as ethically unjust bias.
Contribution/Results: Sybil exhibits significantly higher AUROC in females than males (0.88 vs. 0.81; *p* < 0.001); Venkadesh21 shows markedly lower sensitivity in Black individuals (0.39) versus White individuals (0.69), a disparity robust to clinical covariate adjustment. PanCan2b demonstrates comparatively balanced performance. This work establishes a reproducible methodological framework and empirical benchmark for fairness validation of AI-driven lung cancer screening models.
📝 Abstract
Lung cancer is the leading cause of cancer-related mortality in adults worldwide. Screening high-risk individuals with annual low-dose CT (LDCT) can support earlier detection and reduce deaths, but widespread implementation may strain the already limited radiology workforce. AI models have shown potential in estimating lung cancer risk from LDCT scans. However, high-risk populations for lung cancer are diverse, and these models' performance across demographic groups remains an open question. In this study, we drew on the considerations on confounding factors and ethically significant biases outlined in the JustEFAB framework to evaluate potential performance disparities and fairness in two deep learning risk estimation models for lung cancer screening: the Sybil lung cancer risk model and the Venkadesh21 nodule risk estimator. We also examined disparities in the PanCan2b logistic regression model recommended in the British Thoracic Society nodule management guideline. Both deep learning models were trained on data from the US-based National Lung Screening Trial (NLST), and assessed on a held-out NLST validation set. We evaluated AUROC, sensitivity, and specificity across demographic subgroups, and explored potential confounding from clinical risk factors. We observed a statistically significant AUROC difference in Sybil's performance between women (0.88, 95% CI: 0.86, 0.90) and men (0.81, 95% CI: 0.78, 0.84, p < .001). At 90% specificity, Venkadesh21 showed lower sensitivity for Black (0.39, 95% CI: 0.23, 0.59) than White participants (0.69, 95% CI: 0.65, 0.73). These differences were not explained by available clinical confounders and thus may be classified as unfair biases according to JustEFAB. Our findings highlight the importance of improving and monitoring model performance across underrepresented subgroups, and further research on algorithmic fairness, in lung cancer screening.