Phrase-grounded Fact-checking for Automatically Generated Chest X-ray Reports

๐Ÿ“… 2025-09-20
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Factuality errors and hallucinations severely limit the clinical reliability of automated chest X-ray report generation. Method: a fine-grained, localization-aware fact-checking framework. Its core components are a large synthetic dataset of real/fake "finding–location" pairs and a Multi-label Cross-modal Contrastive Regression (MCCR) network that jointly verifies both the radiological finding and its anatomical location in each report sentence. Training data are derived by perturbing findings and locations in ground-truth reports, using semantic (finding) and spatial (location) perturbation strategies. Contribution/Results: Evaluated on multiple public chest X-ray datasets, the method's error detection on reports from state-of-the-art generators agrees almost perfectly with ground-truth-based verification (concordance correlation coefficient of 0.997), outperforming existing baselines. It establishes an interpretable, verifiable paradigm for clinically trustworthy AI-assisted radiology reporting.

๐Ÿ“ Abstract
With the emergence of large-scale vision-language models (VLMs), it is now possible to produce realistic-looking radiology reports for chest X-ray images. However, their clinical translation has been hampered by factual errors and hallucinations in the descriptions produced during inference. In this paper, we present a novel phrase-grounded fact-checking model (FC model) that detects errors in findings and their indicated locations in automatically generated chest radiology reports. Specifically, we simulate the errors in reports through a large synthetic dataset derived by perturbing findings and their locations in ground truth reports to form real and fake finding–location pairs with images. A new multi-label cross-modal contrastive regression network is then trained on this dataset. We present results demonstrating the robustness of our method in terms of accuracy of finding veracity prediction and localization on multiple X-ray datasets. We also show its effectiveness for error detection in reports of SOTA report generators on multiple datasets, achieving a concordance correlation coefficient of 0.997 with ground truth-based verification, thus pointing to its utility during clinical inference in radiology workflows.
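The abstract's core data-construction idea (perturbing ground-truth finding–location pairs to create fake counterparts) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the vocabularies, function names, and the 50/50 choice between semantic and spatial perturbation are all assumptions.

```python
import random

# Illustrative vocabularies (assumptions, not the paper's actual label set).
FINDINGS = ["opacity", "effusion", "pneumothorax", "consolidation"]
LOCATIONS = ["left lower lobe", "right upper lobe", "left apex", "right base"]

def perturb_pair(finding, location, rng=random):
    """Create a fake pair by replacing either the finding (a semantic
    perturbation) or the location (a spatial perturbation)."""
    if rng.random() < 0.5:
        fake_finding = rng.choice([f for f in FINDINGS if f != finding])
        return fake_finding, location
    fake_location = rng.choice([l for l in LOCATIONS if l != location])
    return finding, fake_location

def build_dataset(gt_pairs, rng=random):
    """Pair each ground-truth (finding, location) phrase with a perturbed
    counterpart: real pairs are labeled 1.0, fake pairs 0.0."""
    data = []
    for finding, location in gt_pairs:
        data.append(((finding, location), 1.0))
        data.append((perturb_pair(finding, location, rng), 0.0))
    return data
```

In the paper these phrase pairs are additionally associated with the corresponding images; the sketch shows only the text-side perturbation logic.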
Problem

Research questions and friction points this paper is trying to address.

Detects factual errors in AI-generated chest X-ray reports
Identifies incorrect findings and their anatomical locations
Verifies report accuracy using phrase-grounded fact-checking model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Phrase-grounded fact-checking model detects radiology report errors
Multi-label cross-modal contrastive regression network trained on the synthetic dataset
Synthetic dataset with perturbed findings simulates report errors
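The agreement metric the abstract reports (0.997 against ground-truth-based verification) is Lin's concordance correlation coefficient. A minimal sketch of the standard formula, for reference; this is not the authors' evaluation code:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two score series:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population variance (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike plain Pearson correlation, CCC also penalizes scale and location shifts, so a value near 1 means the predicted error scores track the ground-truth verification almost exactly, not merely linearly.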
๐Ÿ”Ž Similar Papers
No similar papers found.
Razi Mahmood
Rensselaer Polytechnic Institute, NY, USA
Diego Machado-Reyes
Rensselaer Polytechnic Institute, NY, USA
Joy Wu
IBM Research, Almaden, CA, USA; Stanford University, CA, USA
Parisa Kaviani
Massachusetts General Hospital (MGH), Boston, USA
Ken C. L. Wong
IBM Research
Medical image analysis · Deep learning · 3D image segmentation · Image classification · Computational physiology
Niharika D'Souza
IBM Research, Almaden, CA, USA
Mannudeep Kalra
Massachusetts General Hospital (MGH), Boston, USA
Ge Wang
Rensselaer Polytechnic Institute, NY, USA
Pingkun Yan
P.K. Lashmet Chair Professor and Department Head of BME, Rensselaer Polytechnic Institute
Medical image computing · AI/ML · Image-guided intervention and surgical planning
Tanveer Syeda-Mahmood
IBM Almaden Research Center
Image and video retrieval · Medical imaging · Computer vision · Multimedia