AgentsEval: Clinically Faithful Evaluation of Medical Imaging Reports via Multi-Agent Reasoning

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing methods for automatic evaluation of medical imaging reports, which often fail to capture the structured reasoning inherent in radiological diagnosis and lack clinical interpretability and fidelity in their inferences. To overcome these challenges, the authors propose a multi-agent stream reasoning framework that, for the first time, uses a collaborative mechanism to simulate the workflow of radiologists. The approach decomposes evaluation into interpretable steps, namely criteria definition, evidence extraction, alignment, and consistency scoring, and introduces a new benchmark encompassing multimodal imaging data and controlled semantic perturbations. Extensive experiments across five datasets demonstrate that the proposed method significantly outperforms current approaches in clinical consistency, semantic fidelity, and robustness to perturbations.

📝 Abstract
Evaluating the clinical correctness and reasoning fidelity of automatically generated medical imaging reports remains a critical yet unresolved challenge. Existing evaluation methods often fail to capture the structured diagnostic logic that underlies radiological interpretation, resulting in unreliable judgments and limited clinical relevance. We introduce AgentsEval, a multi-agent stream reasoning framework that emulates the collaborative diagnostic workflow of radiologists. By dividing the evaluation process into interpretable steps including criteria definition, evidence extraction, alignment, and consistency scoring, AgentsEval provides explicit reasoning traces and structured clinical feedback. We also construct a multi-domain perturbation-based benchmark covering five medical report datasets with diverse imaging modalities and controlled semantic variations. Experimental results demonstrate that AgentsEval delivers clinically aligned, semantically faithful, and interpretable evaluations that remain robust under paraphrastic, semantic, and stylistic perturbations. This framework represents a step toward transparent and clinically grounded assessment of medical report generation systems, fostering trustworthy integration of large language models into clinical practice.
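The four-stage pipeline named in the abstract (criteria definition, evidence extraction, alignment, consistency scoring) can be sketched as a chain of agent roles. The class names, the keyword-matching stub, and the weighted scoring rule below are illustrative assumptions for exposition, not the paper's implementation.

```python
# Illustrative sketch of a four-stage multi-agent evaluation pipeline:
# criteria definition -> evidence extraction -> alignment -> consistency
# scoring. All names, weights, and matching rules are assumptions, not
# taken from the paper.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str      # e.g. "findings", "laterality"
    weight: float  # relative clinical importance


def define_criteria() -> list[Criterion]:
    # Agent 1: fix what "clinically correct" means for this report type.
    return [Criterion("findings", 0.5), Criterion("laterality", 0.3),
            Criterion("severity", 0.2)]


def extract_evidence(report: str, criterion: Criterion) -> set[str]:
    # Agent 2: pull criterion-relevant statements (stub: prefix keyword match).
    key = criterion.name[:4]
    return {tok for tok in report.lower().split() if tok.startswith(key)}


def align(ref_ev: set[str], cand_ev: set[str]) -> float:
    # Agent 3: match candidate evidence against reference evidence (Jaccard).
    if not ref_ev and not cand_ev:
        return 1.0  # nothing required, nothing claimed: consistent
    return len(ref_ev & cand_ev) / max(len(ref_ev | cand_ev), 1)


def consistency_score(reference: str, candidate: str) -> float:
    # Agent 4: weighted aggregate over criteria; each term is an
    # interpretable per-criterion alignment, giving a reasoning trace.
    return sum(
        c.weight * align(extract_evidence(reference, c),
                         extract_evidence(candidate, c))
        for c in define_criteria()
    )


score = consistency_score("findings: effusion on left",
                          "findings: effusion on left")
```

An identical candidate scores 1.0; dropping or contradicting a finding lowers only the affected criterion's term, which is what makes the aggregate score decomposable into clinical feedback.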
Problem

Research questions and friction points this paper is trying to address.

medical imaging report evaluation
clinical correctness
reasoning fidelity
radiological interpretation
evaluation reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent reasoning
clinical evaluation
medical imaging reports
structured diagnostic logic
perturbation-based benchmark
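The perturbation-based benchmark pairs each reference report with controlled paraphrastic, semantic, and stylistic variants, so an evaluator's robustness can be probed directly. The three toy perturbation functions below are stand-ins to illustrate the idea, not the paper's actual generation procedure.

```python
# Toy sketch of controlled perturbations for a robustness benchmark.
# A faithful evaluator should tolerate the first and third variants but
# penalize the second. These string edits are illustrative stand-ins.

def paraphrase(report: str) -> str:
    # Meaning-preserving rewording: score should stay high.
    return report.replace("no evidence of", "without")

def semantic_flip(report: str) -> str:
    # Meaning-changing edit (negation removed): score should drop.
    return report.replace("no evidence of", "evidence of")

def restyle(report: str) -> str:
    # Stylistic change only (casing): score should be unaffected.
    return report.upper()

reference = "no evidence of pleural effusion"
perturbed = {
    "paraphrastic": paraphrase(reference),
    "semantic": semantic_flip(reference),
    "stylistic": restyle(reference),
}
```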
Suzhong Fu
FNii-Shenzhen, The Chinese University of Hong Kong (Shenzhen); School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
Jingqi Dong
FNii-Shenzhen, The Chinese University of Hong Kong (Shenzhen); School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
Xuan Ding
FNii-Shenzhen, The Chinese University of Hong Kong (Shenzhen); School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
Rui Sun
The Chinese University of Hong Kong, Shenzhen
Machine Learning
Yiming Yang
FNii-Shenzhen, The Chinese University of Hong Kong (Shenzhen); School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen)
Shuguang Cui
Distinguished Presidential Chair Professor, School of Science and Engineering, CUHKSZ
AI+Networking; Wireless Communications
Zhen Li
The Chinese University of Hong Kong
Computer Vision; Generative Models