🤖 AI Summary
Existing automated radiology report evaluation methods lack clinical grounding, interpretability, and fine-grained analysis, relying on black-box models or coarse overall scores that limit their use in real-world clinical workflows. This paper introduces RadReason, the first explainable, fine-grained assessment framework targeting six clinically defined error types, producing both a granular sub-score and a natural-language justification for each. Methodologically, it builds on Group Relative Policy Optimization and adds two innovations: Sub-score Dynamic Weighting, which adapts error-type weights from live F1 statistics, and Majority-Guided Advantage Scaling, which scales policy-gradient updates by prompt difficulty inferred from sub-score agreement. On the ReXVal benchmark, the approach outperforms all prior offline metrics and matches GPT-4-based evaluation accuracy while offering greater transparency, strong clinical alignment, and much lower deployment cost.
📝 Abstract
Evaluating automatically generated radiology reports remains a fundamental challenge due to the lack of clinically grounded, interpretable, and fine-grained metrics. Existing methods either produce coarse overall scores or rely on opaque black-box models, limiting their usefulness in real-world clinical workflows. We introduce RadReason, a novel evaluation framework for radiology reports that not only outputs fine-grained sub-scores across six clinically defined error types, but also produces human-readable justifications that explain the rationale behind each score. Our method builds on Group Relative Policy Optimization and incorporates two key innovations: (1) Sub-score Dynamic Weighting, which adaptively prioritizes clinically challenging error types based on live F1 statistics; and (2) Majority-Guided Advantage Scaling, which adjusts policy gradient updates based on prompt difficulty derived from sub-score agreement. Together, these components enable more stable optimization and better alignment with expert clinical judgment. Experiments on the ReXVal benchmark show that RadReason surpasses all prior offline metrics and achieves parity with GPT-4-based evaluations, while remaining explainable, cost-efficient, and suitable for clinical deployment. Code will be released upon publication.
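The two training innovations described above can be illustrated with a minimal sketch. Note this is an illustrative reconstruction, not the paper's released implementation: the function names, the softmax-with-temperature weighting, and the disagreement-based difficulty proxy are all assumptions inferred from the abstract's description.

```python
import numpy as np

def subscore_weights(f1_per_type, temperature=1.0):
    """Sub-score Dynamic Weighting (sketch): weight each of the six
    error types inversely to its running F1, so clinically harder
    (low-F1) types get larger weight. A softmax keeps the weights
    positive and summing to 1. The temperature is a hypothetical knob."""
    f1 = np.asarray(f1_per_type, dtype=float)
    logits = (1.0 - f1) / temperature          # low F1 -> high logit
    w = np.exp(logits - logits.max())          # numerically stable softmax
    return w / w.sum()

def majority_scaled_advantages(rewards, subscores,
                               easy_scale=0.5, hard_scale=1.5):
    """Majority-Guided Advantage Scaling (sketch): start from
    group-relative advantages (GRPO-style: reward minus group mean,
    normalized by group std), then scale by prompt difficulty.
    Difficulty is proxied here by disagreement among the group's
    sub-score vectors: high agreement -> easy prompt -> smaller update."""
    r = np.asarray(rewards, dtype=float)
    adv = (r - r.mean()) / (r.std() + 1e-8)
    disagreement = np.asarray(subscores, dtype=float).std(axis=0).mean()
    scale = easy_scale + (hard_scale - easy_scale) * min(disagreement, 1.0)
    return adv * scale
```

Under this reading, the weights shift reward mass toward error types the policy currently evaluates poorly, while the advantage scaling damps gradient updates on prompts the group already agrees on, concentrating learning on genuinely ambiguous cases.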