🤖 AI Summary
This study addresses the performance gap between AI systems and radiologists in chest X-ray report generation, with emphasis on accurately describing pathological findings and interpreting lines and tubes (L&T). To this end, the authors propose MAIRA-X, a multimodal AI model for longitudinal chest X-ray report generation that covers both clinical findings and L&T reporting, together with a novel L&T-specific metrics framework that assesses attributes such as type, longitudinal change, and placement. Trained and validated on a large-scale, multi-site longitudinal dataset of 3.1 million studies, MAIRA-X achieves a critical error rate of 4.6% and an acceptable-sentence rate of 97.4% in a retrospective user study, comparable to the original radiologist-written reports (3.0% and 97.8%, respectively). The results indicate strong clinical consistency and suggest a deployable pathway for AI-assisted reporting in high-volume clinical settings.
📝 Abstract
AI-assisted report generation offers an opportunity to reduce radiologists' workload stemming from expanded screening guidelines, complex cases, and workforce shortages, while maintaining diagnostic accuracy. Beyond describing pathological findings in chest X-ray reports, interpreting lines and tubes (L&T) is demanding and repetitive for radiologists, especially at high patient volumes. We introduce MAIRA-X, a clinically evaluated multimodal AI model for longitudinal chest X-ray (CXR) report generation that encompasses both clinical findings and L&T reporting. Developed using a large-scale, multi-site, longitudinal dataset of 3.1 million studies (comprising 6 million images from 806k patients) from Mayo Clinic, MAIRA-X was evaluated on three holdout datasets and the public MIMIC-CXR dataset, where it significantly improved over state-of-the-art AI-generated reports in lexical quality, clinical correctness, and L&T-related elements. A novel L&T-specific metrics framework was developed to assess accuracy in reporting attributes such as type, longitudinal change, and placement. A first-of-its-kind retrospective user evaluation study was conducted with nine radiologists of varying experience, who blindly reviewed 600 studies from distinct subjects. The user study found comparable rates of critical errors (3.0% for original vs. 4.6% for AI-generated reports) and similar rates of acceptable sentences (97.8% for original vs. 97.4% for AI-generated reports), marking a significant improvement over prior user studies, which reported larger gaps and higher error rates. Our results suggest that MAIRA-X can effectively assist radiologists, particularly in high-volume clinical settings.