🤖 AI Summary
This work addresses the limitations of existing methods for automatically evaluating medical imaging reports, which often fail to capture the structured reasoning inherent in radiological diagnosis and offer little clinical interpretability or fidelity during inference. To overcome these challenges, the authors propose AgentsEval, a multi-agent stream reasoning framework that, for the first time, incorporates a collaborative mechanism to simulate the workflow of radiologists. The approach decomposes evaluation into interpretable steps, namely criteria definition, evidence extraction, alignment, and consistency scoring, and introduces a new benchmark encompassing multimodal imaging data and controlled semantic perturbations. Extensive experiments across five datasets demonstrate that the proposed method significantly outperforms current approaches in clinical consistency, semantic fidelity, and robustness to perturbation.
📝 Abstract
Evaluating the clinical correctness and reasoning fidelity of automatically generated medical imaging reports remains a critical yet unresolved challenge. Existing evaluation methods often fail to capture the structured diagnostic logic that underlies radiological interpretation, resulting in unreliable judgments and limited clinical relevance. We introduce AgentsEval, a multi-agent stream reasoning framework that emulates the collaborative diagnostic workflow of radiologists. By dividing the evaluation process into interpretable steps, including criteria definition, evidence extraction, alignment, and consistency scoring, AgentsEval provides explicit reasoning traces and structured clinical feedback. We also construct a multi-domain, perturbation-based benchmark covering five medical report datasets with diverse imaging modalities and controlled semantic variations. Experimental results demonstrate that AgentsEval delivers clinically aligned, semantically faithful, and interpretable evaluations that remain robust under paraphrastic, semantic, and stylistic perturbations. This framework represents a step toward transparent, clinically grounded assessment of medical report generation systems, fostering trustworthy integration of large language models into clinical practice.
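To make the four-stage decomposition concrete, here is a minimal sketch of how such an evaluation pipeline could be wired together. Everything in it is illustrative, not taken from the paper: the names (`Criterion`, `define_criteria`, `extract_evidence`, `align`, `consistency_score`) are hypothetical, and simple keyword overlap stands in for the LLM-backed agents the framework actually uses.

```python
from dataclasses import dataclass

# Hypothetical sketch of a criteria -> evidence -> alignment -> scoring pipeline.
# Each "agent" below is a plain function standing in for an LLM call.

@dataclass
class Criterion:
    name: str
    keywords: list[str]  # findings a report should mention for this criterion

def define_criteria(reference: str) -> list[Criterion]:
    """Stage 1: derive evaluation criteria from the reference report.
    Toy stand-in: one keyword criterion per reference sentence."""
    sentences = [s.strip() for s in reference.split(".") if s.strip()]
    return [
        Criterion(name=f"finding_{i}", keywords=s.lower().split())
        for i, s in enumerate(sentences)
    ]

def extract_evidence(report: str, criterion: Criterion) -> list[str]:
    """Stage 2: pull tokens from the candidate report relevant to a criterion."""
    words = [w.strip(".,;:").lower() for w in report.split()]
    return [w for w in words if w in criterion.keywords]

def align(criterion: Criterion, evidence: list[str]) -> float:
    """Stage 3: align extracted evidence against the criterion (overlap ratio)."""
    if not criterion.keywords:
        return 0.0
    return len(set(evidence)) / len(set(criterion.keywords))

def consistency_score(reference: str, candidate: str) -> float:
    """Stage 4: aggregate per-criterion alignment into one consistency score."""
    criteria = define_criteria(reference)
    scores = [align(c, extract_evidence(candidate, c)) for c in criteria]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    ref = "Mild cardiomegaly is present. No pleural effusion."
    cand = "The heart is enlarged with mild cardiomegaly. No effusion seen."
    print(f"consistency: {consistency_score(ref, cand):.2f}")
```

In the framework itself each stage would be a collaborating LLM agent exchanging structured messages rather than a keyword matcher; the sketch only illustrates the separation of concerns among the four steps and how per-criterion judgments aggregate into a single consistency score.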