Reason Like a Radiologist: Chain-of-Thought and Reinforcement Learning for Verifiable Report Generation

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current radiology report generation models lack structured reasoning capabilities, hindering precise association of visual findings with anatomical locations—thereby limiting clinical trustworthiness and interpretability. To address this, we propose BoxMed-RL, a unified training framework that innovatively integrates chain-of-thought (CoT) supervision with spatially verifiable bounding-box alignment reinforcement learning (RL), explicitly modeling the radiologist’s “detection–localization–diagnosis” workflow. Built upon a large vision-language model, BoxMed-RL incorporates medical concept pretraining, anatomy-aware CoT supervision, spatial-constrained RL optimization, and lightweight adapter-based fine-tuning. On public benchmarks, it achieves average improvements of 7% in METEOR and ROUGE-L scores, and a 5% gain in LLM-based clinical evaluation scores. The method significantly enhances report accuracy, anatomical consistency, and clinical interpretability.

📝 Abstract
Radiology report generation is critical for efficiency but current models lack the structured reasoning of experts, hindering clinical trust and explainability by failing to link visual findings to precise anatomical locations. This paper introduces BoxMed-RL, a groundbreaking unified training framework for generating spatially verifiable and explainable radiology reports. Built on a large vision-language model, BoxMed-RL revolutionizes report generation through two integrated phases: (1) In the Pretraining Phase, we refine the model via medical concept learning, using Chain-of-Thought supervision to internalize the radiologist-like workflow, followed by spatially verifiable reinforcement, which applies reinforcement learning to align medical findings with bounding boxes. (2) In the Downstream Adapter Phase, we freeze the pretrained weights and train a downstream adapter to ensure fluent and clinically credible reports. This framework precisely mimics radiologists' workflow, compelling the model to connect high-level medical concepts with definitive anatomical evidence. Extensive experiments on public datasets demonstrate that BoxMed-RL achieves an average 7% improvement in both METEOR and ROUGE-L metrics compared to state-of-the-art methods. An average 5% improvement in large language model-based metrics further underscores BoxMed-RL's robustness in generating high-quality radiology reports.
Problem

Research questions and friction points this paper is trying to address.

- Enhances radiology report accuracy via structured reasoning
- Links visual findings to precise anatomical locations
- Improves clinical trust with verifiable report generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Chain-of-Thought supervision for a radiologist-like workflow
- Reinforcement learning aligns findings with bounding boxes
- Downstream adapter ensures fluent and credible reports
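The paper's code is not reproduced on this page. As an illustration only, the "spatially verifiable" reward in the RL phase could plausibly be a bounding-box overlap score; the sketch below computes a mean best-match intersection-over-union (IoU) between predicted and reference boxes. All function names and the exact reward formulation are assumptions, not BoxMed-RL's actual implementation.

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def spatial_reward(pred_boxes, gt_boxes):
    """Hypothetical RL reward: each predicted finding box is scored
    against its best-matching ground-truth box; the mean IoU is the
    scalar reward for the policy update."""
    if not pred_boxes or not gt_boxes:
        return 0.0
    return sum(max(box_iou(p, g) for g in gt_boxes)
               for p in pred_boxes) / len(pred_boxes)
```

A reward of this shape is fully verifiable against annotated boxes, which matches the paper's stated goal of grounding each finding in definitive anatomical evidence.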
Peiyuan Jing — Imperial College London, London, UK
Kinhei Lee — Imperial College London, London, UK
Zhenxuan Zhang — Georgia Institute of Technology
Huichi Zhou — University College London
Zhengqing Yuan — University of Notre Dame
Zhifan Gao — Sun Yat-sen University
Lei Zhu — HKUST(GZ), China
G. Papanastasiou — Athena Research Centre, Athens
Yingying Fang — Imperial College London
Guang Yang — Imperial College London, London, UK