🤖 AI Summary
Prior evaluations of chest X-ray AI models lack cross-national, multi-center, and age-diverse validation, limiting assessment of real-world generalizability and clinical applicability.
Method: We conduct the first large-scale benchmarking study across nine multinational chest X-ray datasets spanning the U.S., Spain, India, Vietnam, and China, evaluating five vision-language foundation models and three CNNs. Using knowledge-enhanced prompt engineering and structured supervised learning, we establish a zero-shot and few-shot cross-dataset transfer evaluation framework.
Contribution/Results: The MAVL model achieves a mean AUROC of 0.82 on public datasets and 0.95 on private ones, ranking first on 14 of 37 standardized public tasks. Critically, we uncover a substantial performance drop in pediatric diagnosis (mean AUROC: 0.88 → 0.57), revealing pronounced age sensitivity. This work establishes an empirical benchmark and evaluation paradigm for assessing the domain robustness and clinical deployment readiness of medical AI.
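The zero-shot transfer setup described above can be sketched as prompt-image similarity scoring. The following is a minimal illustration with a stub encoder; all function names, prompts, and identifiers are hypothetical and do not reflect the authors' actual implementation, which would use a pretrained vision-language encoder.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    # Stub encoder: deterministic pseudo-embedding seeded from a CRC32 hash.
    # A real pipeline would encode images and prompts with a pretrained
    # vision-language model instead.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def zero_shot_score(image_id, pathology):
    # Pair each finding with a contrasting "no evidence" prompt; the score
    # is a softmax over the image's cosine similarity to each prompt.
    pos = embed(f"chest x-ray showing {pathology}")
    neg = embed(f"chest x-ray with no evidence of {pathology}")
    img = embed(image_id)  # stand-in for the image encoder
    sims = np.array([img @ pos, img @ neg])
    probs = np.exp(sims) / np.exp(sims).sum()
    return probs[0]  # probability assigned to the positive finding

score = zero_shot_score("patient_001.png", "cardiomegaly")
```

Knowledge-enhanced prompt engineering, in this framing, amounts to enriching the positive/negative prompt pairs with clinical descriptors of each pathology before encoding.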
📝 Abstract
Foundation models leveraging vision-language pretraining have shown promise in chest X-ray (CXR) interpretation, yet their real-world performance across diverse populations and diagnostic tasks remains insufficiently evaluated. This study benchmarks the diagnostic performance and generalizability of foundation models versus traditional convolutional neural networks (CNNs) on multinational CXR datasets. We evaluated eight CXR diagnostic models (five vision-language foundation models and three CNN-based architectures) across 37 standardized classification tasks using six public datasets from the USA, Spain, India, and Vietnam, and three private datasets from hospitals in China. Performance was assessed using AUROC, AUPRC, and other metrics across both shared and dataset-specific tasks. Foundation models outperformed CNNs in both accuracy and task coverage. MAVL, a model incorporating knowledge-enhanced prompts and structured supervision, achieved the highest performance on public (mean AUROC: 0.82; AUPRC: 0.32) and private (mean AUROC: 0.95; AUPRC: 0.89) datasets, ranking first in 14 of 37 public and 3 of 4 private tasks. All models showed reduced performance on pediatric cases, with average AUROC dropping from 0.88 ± 0.18 in adults to 0.57 ± 0.29 in children (p = 0.0202). These findings highlight the value of structured supervision and prompt design in radiologic AI and suggest future directions including geographic expansion and ensemble modeling for clinical deployment. Code for all evaluated models is available at https://drive.google.com/drive/folders/1B99yMQm7bB4h1sVMIBja0RfUu8gLktCE
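For readers unfamiliar with the two headline metrics, AUROC can be computed via the rank-sum (Mann-Whitney U) identity and AUPRC via average precision. The sketch below is a minimal NumPy illustration of these standard definitions (function names are ours, not from the paper's code), assuming untied scores for the AUROC case:

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the rank-sum (Mann-Whitney U) identity; assumes no tied scores."""
    y_true = np.asarray(y_true, dtype=bool)
    ranks = np.argsort(np.argsort(y_score)) + 1  # 1-based ascending ranks
    n_pos, n_neg = y_true.sum(), (~y_true).sum()
    u = ranks[y_true].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

def auprc(y_true, y_score):
    """Average precision: mean of precision evaluated at each positive,
    with examples sorted by descending score."""
    order = np.argsort(y_score)[::-1]
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)
    precision_at_pos = hits[y == 1] / (np.flatnonzero(y == 1) + 1)
    return precision_at_pos.mean()

# Toy example: two negatives, two positives
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Unlike AUROC, AUPRC is sensitive to class prevalence, which is why the paper reports both: the gap between MAVL's public AUPRC (0.32) and AUROC (0.82) largely reflects the rarity of positive findings in the public datasets.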