🤖 AI Summary
Existing approaches to radiology report generation often suffer from limited clinical utility and poor data efficiency. To address these shortcomings, this work proposes a diagnostic diversity-based sampling strategy that improves training-data quality and introduces Diagnostic Token-weighted Policy Optimization (DiTPO), a reinforcement-learning method that uses a diagnostic F1 score as the reward signal for report generation. DiTPO employs rule- or gradient-based token-weighting mechanisms to prioritize clinically critical content during policy optimization. Evaluated on MIMIC-CXR, IU-Xray, and CheXpert Plus, the approach achieves state-of-the-art performance; on MIMIC-CXR it attains an F1 score of 0.516 using only 20% of the RL training samples, demonstrating a simultaneous improvement in both sample efficiency and clinical accuracy.
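The diagnostic F1 reward described above can be illustrated with a minimal sketch. This is an assumption about the general shape of such a reward, not the paper's exact implementation: it computes F1 over the sets of positive finding labels extracted from a generated and a reference report (e.g. by a labeler such as CheXbert; the labeler itself is out of scope here).

```python
def diagnostic_f1(pred_labels: set, ref_labels: set) -> float:
    """F1 over positive finding labels of one (generated, reference) report pair.

    `pred_labels` / `ref_labels` are sets of finding names extracted from the
    generated and reference reports. Hypothetical helper, for illustration only.
    """
    if not pred_labels and not ref_labels:
        return 1.0  # neither report states a finding: treat as a perfect match
    tp = len(pred_labels & ref_labels)  # true-positive findings
    precision = tp / len(pred_labels) if pred_labels else 0.0
    recall = tp / len(ref_labels) if ref_labels else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a generated report labeled `{"edema", "effusion"}` against a reference labeled `{"effusion"}` yields precision 0.5 and recall 1.0, so F1 = 2/3.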
📝 Abstract
Radiologists highly desire fully automated AI for radiology report generation (R2G), yet existing approaches fall short in clinical utility. Reinforcement learning (RL) holds potential to address these shortcomings, but its adoption in this task remains underexplored. In this paper, we revisit RL in terms of data efficiency and optimization effectiveness for R2G. First, we explore the impact of data quantity and quality on the performance of RL in medical contexts, revealing that data quality plays a more critical role than quantity. To this end, we propose a diagnostic diversity-based data sampling strategy that enables comparable performance with fewer samples. Second, we observe that the majority of tokens in radiology reports are template-like and diagnostically uninformative, whereas clinically critical tokens occur so infrequently that they risk being overlooked during optimization. To tackle this, we introduce Diagnostic Token-weighted Policy Optimization (DiTPO), which directly optimizes for clinical accuracy by using a diagnostic F1 score as the reward signal. Unlike standard RL approaches that treat all tokens equally, DiTPO explicitly models the varying importance of different tokens through rule- or gradient-based mechanisms to prioritize clinically relevant content. Extensive experiments on the MIMIC-CXR, IU-Xray, and CheXpert Plus datasets demonstrate that our framework achieves state-of-the-art (SOTA) performance while requiring substantially fewer training samples in RL. Notably, on MIMIC-CXR, our framework attains an F1 score of 0.516 using only 20% of the RL training samples.
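The token-weighting idea can be sketched as a weighted REINFORCE-style surrogate loss. This is a minimal illustration of the concept, assuming a simple rule-based weighting (clinical-vocabulary tokens upweighted, all others weight 1.0) and a sequence-level diagnostic F1 reward; it is not the paper's exact objective, and all names here are hypothetical.

```python
def ditpo_style_loss(token_logps, token_weights, reward, baseline=0.0):
    """Token-weighted policy-gradient surrogate loss for one sampled report.

    Each token's policy-gradient term is scaled by an importance weight so
    that rare, clinically critical tokens are not drowned out by the
    template-like majority. Sketch only, not the paper's exact formulation.

    token_logps   : log-probabilities of the sampled report's tokens
    token_weights : per-token importance (e.g. rule-based: >1.0 for tokens in
                    a clinical vocabulary, 1.0 otherwise)
    reward        : scalar diagnostic F1 of the whole sampled report
    baseline      : variance-reduction baseline subtracted from the reward
    """
    advantage = reward - baseline
    weighted_logp = sum(w * lp for w, lp in zip(token_weights, token_logps))
    # Negate for gradient descent; normalize by total weight so the loss
    # scale stays comparable across reports of different lengths.
    return -advantage * weighted_logp / sum(token_weights)
```

With `token_logps = [-1.0, -2.0]`, `token_weights = [1.0, 3.0]`, and `reward = 0.5`, the weighted log-probability sum is -7.0 and the loss is 0.875; raising the second token's weight pushes optimization to concentrate on that token.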