🤖 AI Summary
Current radiology report generation models lack structured reasoning capabilities, which prevents them from precisely associating visual findings with anatomical locations and thereby limits clinical trustworthiness and interpretability. To address this, we propose BoxMed-RL, a unified training framework that integrates chain-of-thought (CoT) supervision with spatially verifiable bounding-box alignment reinforcement learning (RL), explicitly modeling the radiologist's "detection–localization–diagnosis" workflow. Built upon a large vision-language model, BoxMed-RL combines medical concept pretraining, anatomy-aware CoT supervision, spatially constrained RL optimization, and lightweight adapter-based fine-tuning. On public benchmarks, it achieves average improvements of 7% in METEOR and ROUGE-L scores and a 5% gain in LLM-based clinical evaluation scores, enhancing report accuracy, anatomical consistency, and clinical interpretability.
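The summary's "spatially verifiable bounding-box alignment" suggests an RL reward that scores how well the boxes a model grounds its findings in overlap reference annotations. The paper's exact reward is not given here, so the following is a minimal sketch under the common assumption that the reward is built from intersection-over-union (IoU) between predicted and reference boxes; `spatial_reward` and its best-match averaging are illustrative names and choices, not the authors' definition.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def spatial_reward(pred_boxes, ref_boxes):
    """Hypothetical spatial reward: each predicted finding box is scored
    against its best-matching reference box, then averaged. A perfectly
    localized report would score 1.0, an unlocalized one 0.0."""
    if not pred_boxes or not ref_boxes:
        return 0.0
    return sum(max(iou(p, r) for r in ref_boxes) for p in pred_boxes) / len(pred_boxes)
```

In an RL phase such a scalar could be combined with text-quality rewards when updating the policy; the weighting between spatial and linguistic terms is left unspecified here.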
📝 Abstract
Radiology report generation is critical for clinical efficiency, but current models lack the structured reasoning of experts: they fail to link visual findings to precise anatomical locations, which hinders clinical trust and explainability. This paper introduces BoxMed-RL, a unified training framework for generating spatially verifiable and explainable radiology reports. Built on a large vision-language model, BoxMed-RL proceeds in two integrated phases: (1) In the Pretraining Phase, we refine the model via medical concept learning, using Chain-of-Thought supervision to internalize a radiologist-like workflow, followed by spatially verifiable reinforcement, which applies reinforcement learning to align medical findings with bounding boxes. (2) In the Downstream Adapter Phase, we freeze the pretrained weights and train a downstream adapter to ensure fluent and clinically credible reports. This framework mimics radiologists' workflow, compelling the model to connect high-level medical concepts with definitive anatomical evidence. Extensive experiments on public datasets demonstrate that BoxMed-RL achieves an average 7% improvement in both METEOR and ROUGE-L metrics compared to state-of-the-art methods. An average 5% improvement in large language model-based metrics further underscores BoxMed-RL's robustness in generating high-quality radiology reports.
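The Downstream Adapter Phase described above (frozen pretrained weights, small trainable module on top) matches the standard bottleneck-adapter pattern. The paper's adapter architecture is not specified in this abstract, so the sketch below shows only the generic pattern: a down-projection, a nonlinearity, an up-projection, and a residual connection to the frozen hidden state. All shapes, the ReLU choice, and the function names are assumptions for illustration, written in dependency-free Python.

```python
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def adapter_forward(h, w_down, w_up):
    """Bottleneck adapter sketch (hypothetical, not the paper's exact design).

    h       : hidden state from the frozen backbone (length d)
    w_down  : trainable down-projection, shape (r, d) with r << d
    w_up    : trainable up-projection, shape (d, r)

    Only w_down / w_up would be updated during fine-tuning; the backbone
    producing h stays frozen, keeping the trainable parameter count small.
    """
    z = [max(0.0, v) for v in matvec(w_down, h)]          # bottleneck + ReLU
    return [hi + ui for hi, ui in zip(h, matvec(w_up, z))]  # residual add
```

A useful property of this design is that initializing `w_up` to zeros makes the adapter an identity at the start of training, so fine-tuning begins exactly from the frozen pretrained model's behavior.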