Standardizing Longitudinal Radiology Report Evaluation via Large Language Model Annotation

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of evaluating radiology report generation models in the absence of efficient, consistent tools for automatically annotating longitudinal information such as disease progression. The authors propose a two-stage pipeline that leverages large language models (e.g., Qwen2.5-32B): it first identifies sentences containing longitudinal cues and then extracts explicit disease progression states, replacing traditional rule-based or closed-vocabulary approaches. Validated against 500 manually annotated reports, the method is used to label 95,169 MIMIC-CXR reports, establishing a standardized benchmark for longitudinal evaluation. Compared with existing annotation solutions, it achieves 11.3% and 5.3% higher F1-scores for longitudinal information detection and disease progression tracking, respectively, while improving generalizability and scalability.
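The summary above describes a two-stage annotation pipeline. The Python sketch below illustrates how such a pipeline could be wired together; it is a minimal illustration under stated assumptions, not the authors' implementation. The `ask_llm` placeholder, the prompt wording, and the progression label set are all assumptions made for the example.

```python
# Hypothetical sketch of a two-stage LLM annotation pipeline for longitudinal
# information in radiology reports. `ask_llm` stands in for whatever inference
# backend serves Qwen2.5-32B; prompts and labels are illustrative only.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM such as Qwen2.5-32B (assumed backend)."""
    raise NotImplementedError

# Assumed progression label set for illustration; the paper's states may differ.
PROGRESSION_LABELS = ["improved", "worsened", "stable", "new", "resolved"]

def split_sentences(report: str) -> list[str]:
    # Naive splitter; a real pipeline would use a clinical sentence tokenizer.
    return [s.strip() for s in report.split(".") if s.strip()]

def stage1_detect_longitudinal(sentence: str) -> bool:
    """Stage 1: decide whether a sentence compares findings to a prior study."""
    prompt = (
        "Does the following radiology report sentence compare findings to a "
        f"previous examination? Answer yes or no.\nSentence: {sentence}"
    )
    return ask_llm(prompt).strip().lower().startswith("yes")

def stage2_extract_progression(sentence: str) -> dict:
    """Stage 2: extract the finding and its progression state."""
    prompt = (
        "Extract the finding and its progression state "
        f"({', '.join(PROGRESSION_LABELS)}) from this sentence.\n"
        f"Sentence: {sentence}\nAnswer as 'finding | state'."
    )
    finding, _, state = ask_llm(prompt).partition("|")
    return {"finding": finding.strip(), "state": state.strip().lower()}

def annotate_report(report: str) -> list[dict]:
    """Run both stages over a report and return its longitudinal annotations."""
    return [
        stage2_extract_progression(s)
        for s in split_sentences(report)
        if stage1_detect_longitudinal(s)
    ]
```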

📝 Abstract
Longitudinal information in radiology reports refers to the sequential tracking of findings across multiple examinations over time, which is crucial for monitoring disease progression and guiding clinical decisions. Many recent automated radiology report generation methods are designed to capture longitudinal information; however, validating their performance is challenging. There is no proper tool to consistently label temporal changes in both ground-truth and model-generated texts for meaningful comparisons. Existing annotation methods are typically labor-intensive, relying on the use of manual lexicons and rules. Complex rules are closed-source, domain-specific, and hard to adapt, whereas overly simple ones tend to miss essential specialised information. Large language models (LLMs) offer a promising annotation alternative, as they are capable of capturing nuanced linguistic patterns and semantic similarities without extensive manual intervention. They also adapt well to new contexts. In this study, we therefore propose an LLM-based pipeline to automatically annotate longitudinal information in radiology reports. The pipeline first identifies sentences containing relevant information and then extracts the progression of diseases. We evaluate and compare five mainstream LLMs on these two tasks using 500 manually annotated reports. Considering both efficiency and performance, Qwen2.5-32B was subsequently selected and used to annotate another 95,169 reports from the public MIMIC-CXR dataset. Our Qwen2.5-32B-annotated dataset provides us with a standardized benchmark for evaluating report generation models. Using this new benchmark, we assessed seven state-of-the-art report generation models. Our LLM-based annotation method outperforms existing annotation solutions, achieving 11.3% and 5.3% higher F1-scores for longitudinal information detection and disease tracking, respectively.
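To make the benchmark comparison concrete, below is a minimal Python sketch of how an F1-score could be computed between longitudinal annotations extracted from a model-generated report and from its ground-truth counterpart. The matching criterion (exact equality of finding/state pairs) and the example findings are assumptions for illustration; the paper's scoring protocol may differ.

```python
# Minimal sketch: F1 between predicted and reference (finding, state) pairs,
# e.g. as produced by the annotation pipeline for a generated report and its
# ground-truth report. Exact-match scoring is an assumption for illustration.

def f1_over_annotations(pred: list[tuple[str, str]],
                        gold: list[tuple[str, str]]) -> float:
    pred_set, gold_set = set(pred), set(gold)
    tp = len(pred_set & gold_set)          # annotations matched exactly
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

# Example with hypothetical findings: one matched pair out of two on each side.
print(f1_over_annotations(
    [("pleural effusion", "improved"), ("edema", "stable")],
    [("pleural effusion", "improved"), ("pneumothorax", "resolved")],
))  # 0.5
```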
Problem

Research questions and friction points this paper is trying to address.

longitudinal information
radiology report evaluation
annotation standardization
temporal change labeling
automated benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

longitudinal radiology reports
large language models
automated annotation
disease progression tracking
standardized benchmark
Xinyi Wang
School of Computer Science, The University of Nottingham, Nottingham, NG7 2RD, United Kingdom
G. Figueredo
School of Medicine, The University of Nottingham, Nottingham, NG7 2RD, United Kingdom
Ruizhe Li
Research Fellow, Computer Science, University of Nottingham
Medical Image Analysis · Deep Learning
Xin Chen
Associate Professor, University of Nottingham
Medical Image Analysis · Computer Vision · Machine Learning