Closing the Performance Gap Between AI and Radiologists in Chest X-Ray Reporting

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the performance gap between AI systems and radiologists in chest X-ray report generation, covering both pathological findings and lines and tubes (L&T), whose interpretation is demanding and repetitive at high patient volumes. The authors propose MAIRA-X, a clinically evaluated multimodal AI model for longitudinal chest X-ray report generation, and introduce a novel L&T-specific metrics framework that assesses attributes such as device type, placement, and longitudinal change. Developed on 3.1 million multi-site longitudinal studies from Mayo Clinic and evaluated on three holdout datasets plus the public MIMIC-CXR dataset, MAIRA-X significantly improves on the state of the art in lexical quality, clinical correctness, and L&T reporting. In a retrospective user study with nine radiologists blindly reviewing 600 studies, AI-generated reports showed a critical error rate of 4.6% (vs. 3.0% for original reports) and an acceptable-sentence rate of 97.4% (vs. 97.8%), approaching radiologist-level performance and suggesting a deployable path for intelligent diagnostic assistance in high-workload clinical settings.

📝 Abstract
AI-assisted report generation offers the opportunity to reduce radiologists' workload stemming from expanded screening guidelines, complex cases and workforce shortages, while maintaining diagnostic accuracy. In addition to describing pathological findings in chest X-ray reports, interpreting lines and tubes (L&T) is demanding and repetitive for radiologists, especially with high patient volumes. We introduce MAIRA-X, a clinically evaluated multimodal AI model for longitudinal chest X-ray (CXR) report generation, that encompasses both clinical findings and L&T reporting. Developed using a large-scale, multi-site, longitudinal dataset of 3.1 million studies (comprising 6 million images from 806k patients) from Mayo Clinic, MAIRA-X was evaluated on three holdout datasets and the public MIMIC-CXR dataset, where it significantly improved AI-generated reports over the state of the art on lexical quality, clinical correctness, and L&T-related elements. A novel L&T-specific metrics framework was developed to assess accuracy in reporting attributes such as type, longitudinal change and placement. A first-of-its-kind retrospective user evaluation study was conducted with nine radiologists of varying experience, who blindly reviewed 600 studies from distinct subjects. The user study found comparable rates of critical errors (3.0% for original vs. 4.6% for AI-generated reports) and a similar rate of acceptable sentences (97.8% for original vs. 97.4% for AI-generated reports), marking a significant improvement over prior user studies with larger gaps and higher error rates. Our results suggest that MAIRA-X can effectively assist radiologists, particularly in high-volume clinical settings.
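The abstract's headline numbers are simple aggregate rates over blinded reviewer judgments. A minimal sketch of how such rates could be tallied is below; the `SentenceReview` fields and the per-sentence granularity are illustrative assumptions, not the paper's actual review schema.

```python
# Hypothetical sketch: aggregating blinded-review labels into the two
# headline rates quoted in the abstract. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class SentenceReview:
    acceptable: bool       # reviewer judged the sentence acceptable
    critical_error: bool   # reviewer flagged a critical error

def summarize(reviews: list[SentenceReview]) -> dict[str, float]:
    """Return percentage rates over all reviewed items."""
    n = len(reviews)
    return {
        "acceptable_rate": 100.0 * sum(r.acceptable for r in reviews) / n,
        "critical_error_rate": 100.0 * sum(r.critical_error for r in reviews) / n,
    }

# Toy data: 1000 reviewed sentences, 974 acceptable, 46 flagged critical,
# mirroring the AI-generated report figures (97.4% and 4.6%).
toy = [SentenceReview(acceptable=i < 974, critical_error=i < 46) for i in range(1000)]
print(summarize(toy))  # {'acceptable_rate': 97.4, 'critical_error_rate': 4.6}
```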
Problem

Research questions and friction points this paper is trying to address.

Develops AI model for chest X-ray report generation
Improves reporting of clinical findings and lines/tubes
Reduces radiologists' workload while maintaining diagnostic accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal AI model for chest X-ray report generation
Novel L&T-specific metrics framework for accuracy assessment
First-of-its-kind retrospective user evaluation with radiologists
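The L&T metrics framework scores how well a generated report captures device type, placement, and longitudinal change. A toy sketch in that spirit is below; the data structures, attribute values, and exact-match-by-type scoring are assumptions for illustration, not the paper's published metric definitions.

```python
# Hypothetical sketch of an L&T attribute comparison: match devices
# extracted from a generated report against a reference by device type,
# then score agreement on placement and longitudinal change.
from dataclasses import dataclass

@dataclass(frozen=True)
class LTFinding:
    device_type: str   # e.g. "endotracheal tube", "central venous catheter"
    placement: str     # e.g. "appropriately positioned", "malpositioned"
    change: str        # e.g. "new", "unchanged", "removed"

def lt_attribute_accuracy(generated: list[LTFinding],
                          reference: list[LTFinding]) -> dict[str, float]:
    """Fraction of type-matched devices whose attributes agree."""
    ref_by_type = {f.device_type: f for f in reference}
    matched = [(g, ref_by_type[g.device_type])
               for g in generated if g.device_type in ref_by_type]
    if not matched:
        return {"placement": 0.0, "change": 0.0}
    return {
        "placement": sum(g.placement == r.placement for g, r in matched) / len(matched),
        "change": sum(g.change == r.change for g, r in matched) / len(matched),
    }

gen = [LTFinding("endotracheal tube", "appropriately positioned", "unchanged")]
ref = [LTFinding("endotracheal tube", "appropriately positioned", "new")]
print(lt_attribute_accuracy(gen, ref))  # {'placement': 1.0, 'change': 0.0}
```

Matching by device type before comparing attributes keeps the score interpretable: placement and change errors are only counted for devices the model actually reported.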
👥 Authors
Harshita Sharma (Microsoft)
Maxwell C. Reynolds (Mayo Clinic)
Valentina Salvatelli (Microsoft)
Anne-Marie G. Sykes (Mayo Clinic)
Kelly K. Horst (Mayo Clinic)
Anton Schwaighofer (Microsoft)
Maximilian Ilse (Microsoft Research)
Olesya Melnichenko (Microsoft)
Sam Bond-Taylor (Microsoft Research)
Fernando Pérez-García (Microsoft Research)
Vamshi K. Mugu (Mayo Clinic)
Alex Chan (Mayo Clinic)
Ceylan Colak (Mayo Clinic)
Shelby A. Swartz (Mayo Clinic)
Motassem B. Nashawaty (Mayo Clinic)
Austin J. Gonzalez (Mayo Clinic)
Heather A. Ouellette (Mayo Clinic)
Selnur B. Erdal (Mayo Clinic)
Beth A. Schueler (Mayo Clinic)
Maria T. Wetscherek (Microsoft)
Noel Codella (Microsoft)
Mohit Jain (Microsoft)
Shruthi Bannur (Microsoft Research)
Kenza Bouzid (Microsoft Research)
Daniel C. Castro (Microsoft)