🤖 AI Summary
Evaluating long-context radiology report generation remains challenging: conventional NLG metrics ignore clinical correctness, LLM-based metrics generalize poorly, and clinical accuracy metrics are biased by class imbalance. This paper introduces CRG Score, presented as the first clinical-distribution-aware evaluation metric for radiology report generation. Its contributions are threefold: (1) it scores only the clinically annotated abnormalities in reference reports, mitigating class-imbalance bias; (2) it uses a distribution-adaptive penalty mechanism compatible with both binary and structured labels (e.g., abnormality type and anatomical location); and (3) it adopts a decoupled architecture that can plug in any LLM for feature extraction, combining clinical knowledge constraints, distribution-weighted scoring, and structured matching. Experiments show that CRG Score substantially improves evaluation fairness and generalizability; it is differentiable, clinically aligned, and can serve as a reliable reward signal for reinforcement-learning-based optimization of radiology report generation.
📝 Abstract
Evaluating long-context radiology report generation is challenging. NLG metrics fail to capture clinical correctness, while LLM-based metrics often lack generalizability. Clinical accuracy metrics are more relevant but are sensitive to class imbalance, frequently favoring trivial predictions. We propose the CRG Score, a distribution-aware and adaptable metric that evaluates only clinically relevant abnormalities explicitly described in reference reports. CRG supports both binary and structured labels (e.g., type, location) and can be paired with any LLM for feature extraction. By balancing penalties based on label distribution, it enables fairer, more robust evaluation and serves as a clinically aligned reward function.
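The abstract does not give the exact formula, but the core idea, scoring only abnormalities named in the reference and weighting penalties by label distribution, can be illustrated with a minimal sketch. Everything below (function names, inverse-frequency weighting, the Jaccard-style aggregation) is a hypothetical reading of the description, not the paper's actual definition:

```python
from collections import Counter

def class_prevalence(reference_label_sets):
    """Fraction of reference reports that mention each abnormality class."""
    counts = Counter()
    for labels in reference_label_sets:
        counts.update(labels)
    n = len(reference_label_sets)
    return {c: counts[c] / n for c in counts}

def crg_style_score(pred_labels, ref_labels, prevalence, eps=1e-6):
    """Distribution-weighted agreement over abnormalities in the reference.

    Rare findings (low prevalence) carry larger weight, so missing them
    costs more than missing common ones; predicted classes absent from
    the reference count as false positives under the same weighting.
    This is an illustrative stand-in for CRG Score, not its definition.
    """
    ref, pred = set(ref_labels), set(pred_labels)
    if not ref:
        # No annotated abnormalities: reward an empty (normal) prediction.
        return 1.0 if not pred else 0.0
    w = lambda c: 1.0 / (prevalence.get(c, eps) + eps)  # inverse-frequency weight
    tp = sum(w(c) for c in ref & pred)   # correctly reported findings
    fn = sum(w(c) for c in ref - pred)   # missed findings
    fp = sum(w(c) for c in pred - ref)   # hallucinated findings
    return tp / (tp + fn + fp)
```

Under this weighting, omitting a rare abnormality lowers the score more than omitting a common one, which is the fairness property the paper attributes to its distribution-adaptive penalty; the structured-label variant would match (type, location) tuples instead of bare class names.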