Benchmarking Chest X-ray Diagnosis Models Across Multinational Datasets

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior evaluations of chest X-ray AI models lack cross-national, multi-center, and age-diverse validation, limiting assessment of real-world generalizability and clinical applicability. Method: The authors conduct the first large-scale benchmarking study across nine multinational chest X-ray datasets spanning the U.S., Spain, India, Vietnam, and China, evaluating five vision-language foundation models and three CNNs on 37 standardized classification tasks in a zero-shot and few-shot cross-dataset transfer framework. Contribution/Results: MAVL, a model combining knowledge-enhanced prompt engineering with structured supervision, achieves the highest performance (mean AUROC of 0.82 on public datasets and 0.95 on private ones) and ranks first on 14 of 37 standardized public tasks. Critically, all models show a substantial performance drop in pediatric diagnosis (mean AUROC: 0.88 → 0.57), revealing pronounced age sensitivity. This work establishes an empirical benchmark for evaluating the domain robustness and clinical deployment readiness of medical AI.
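The zero-shot transfer setting described above can be sketched in CLIP-style terms: each finding is scored by the similarity between the image embedding and the embedding of a text prompt describing that finding. The embeddings below are random stand-ins, not outputs of any model from the paper, and the prompt wording is illustrative only.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_scores(image_emb, prompt_embs):
    """Score each candidate finding by image-prompt similarity (in [-1, 1])."""
    return {finding: cosine(image_emb, emb) for finding, emb in prompt_embs.items()}

# Toy embeddings: random stand-ins for a pretrained vision-language encoder.
rng = np.random.default_rng(1)
image_emb = rng.standard_normal(64)
prompt_embs = {
    # A knowledge-enhanced prompt might describe visual attributes of the finding,
    # e.g. "patchy airspace opacity with consolidation" rather than just the label.
    "pneumonia": rng.standard_normal(64),
    "no finding": rng.standard_normal(64),
}

scores = zero_shot_scores(image_emb, prompt_embs)
pred = max(scores, key=scores.get)  # highest-similarity finding wins
```

In practice the per-finding similarity would be thresholded or ranked to produce the AUROC-style scores the study reports; this sketch only shows the scoring step.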

📝 Abstract
Foundation models leveraging vision-language pretraining have shown promise in chest X-ray (CXR) interpretation, yet their real-world performance across diverse populations and diagnostic tasks remains insufficiently evaluated. This study benchmarks the diagnostic performance and generalizability of foundation models versus traditional convolutional neural networks (CNNs) on multinational CXR datasets. We evaluated eight CXR diagnostic models - five vision-language foundation models and three CNN-based architectures - across 37 standardized classification tasks using six public datasets from the USA, Spain, India, and Vietnam, and three private datasets from hospitals in China. Performance was assessed using AUROC, AUPRC, and other metrics across both shared and dataset-specific tasks. Foundation models outperformed CNNs in both accuracy and task coverage. MAVL, a model incorporating knowledge-enhanced prompts and structured supervision, achieved the highest performance on public (mean AUROC: 0.82; AUPRC: 0.32) and private (mean AUROC: 0.95; AUPRC: 0.89) datasets, ranking first in 14 of 37 public and 3 of 4 private tasks. All models showed reduced performance on pediatric cases, with average AUROC dropping from 0.88 +/- 0.18 in adults to 0.57 +/- 0.29 in children (p = 0.0202). These findings highlight the value of structured supervision and prompt design in radiologic AI and suggest future directions including geographic expansion and ensemble modeling for clinical deployment. Code for all evaluated models is available at https://drive.google.com/drive/folders/1B99yMQm7bB4h1sVMIBja0RfUu8gLktCE
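The evaluation protocol in the abstract (per-task AUROC and AUPRC, averaged across tasks) can be sketched as follows. The task names and synthetic labels here are illustrative, not taken from any of the nine datasets.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate_tasks(tasks):
    """Compute AUROC and AUPRC for each binary task, then return the means.

    `tasks` maps a task name to (y_true, y_score), where y_true holds
    binary labels (finding present/absent) and y_score holds the model's
    predicted probabilities for the positive class.
    """
    aurocs, auprcs = [], []
    for name, (y_true, y_score) in tasks.items():
        aurocs.append(roc_auc_score(y_true, y_score))
        auprcs.append(average_precision_score(y_true, y_score))
    return float(np.mean(aurocs)), float(np.mean(auprcs))

# Synthetic example with two hypothetical tasks.
rng = np.random.default_rng(0)
tasks = {
    "cardiomegaly": (rng.integers(0, 2, 200), rng.random(200)),
    "pleural_effusion": (rng.integers(0, 2, 200), rng.random(200)),
}
mean_auroc, mean_auprc = evaluate_tasks(tasks)
print(f"mean AUROC={mean_auroc:.2f}, mean AUPRC={mean_auprc:.2f}")
```

With random scores as above, both means hover near chance level; the study's reported numbers (e.g. MAVL's mean AUROC of 0.82 public / 0.95 private) come from the same kind of per-task aggregation over real predictions.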
Problem

Research questions and friction points this paper is trying to address.

Evaluating chest X-ray models across diverse populations
Comparing foundation models versus CNNs on multinational datasets
Assessing AI performance gaps in pediatric cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language pretraining for CXR diagnosis
Knowledge-enhanced prompts and structured supervision
Benchmarking across multinational datasets
Qinmei Xu
Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, Stanford, CA, USA
Yiheng Li
Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, Stanford, CA, USA
Xianghao Zhan
Meta, Stanford University, Samsung Research America, Zhejiang University
Traumatic Brain Injury, BCI, Health Sensors, Biomedical Informatics, ML Uncertainty
Ahmet Gorkem Er
Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, Stanford, CA, USA
Brittany Dashevsky
Stanford Breast Imaging, Department of Radiology, Stanford University, Stanford, CA, USA
Chuanjun Xu
Department of Radiology, the Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, China
Mohammed Alawad
National Center for AI (NCAI), Saudi Data and AI Authority (SDAIA), Riyadh, Saudi Arabia
Mengya Yang
Department of Radiology, Jinling Hospital, Nanjing, Jiangsu, China
Liu Ya
Department of Radiology, Jinling Hospital, Nanjing, Jiangsu, China
Changsheng Zhou
Department of Radiology, Jinling Hospital, Nanjing, Jiangsu, China
Xiao Li
Department of Radiology, Jinling Hospital, Nanjing, Jiangsu, China
Haruka Itakura
Division of Oncology, Department of Medicine, Stanford University, Stanford, CA, USA
Olivier Gevaert
Stanford University
machine learning, bioinformatics, epigenomics, radiogenomics, digital twins