🤖 AI Summary
To address two key challenges in automated radiology report generation (RRG), namely complex visual information extraction and the difficulty of evaluating report quality, this paper proposes a multi-scale, multimodal tokenization framework that integrates a vision encoder and a text tokenizer within a large language model. The authors introduce GREEN-RedLlama-guided direct preference optimization (DPO), enabling fine-grained semantic alignment between generated reports and expert annotations. By combining contrastive learning with DPO, the method improves cross-modal semantic consistency and the accuracy of clinical expression, especially in few-shot settings. Extensive experiments on four benchmark CT image–report datasets demonstrate state-of-the-art performance, outperforming existing methods across all metrics and validating the framework's generalizability and clinical applicability.
📝 Abstract
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and the provision of management advice. RRG is complicated by two key challenges: (1) the inherent complexity of extracting relevant information from imaging data under resource constraints, and (2) the difficulty of objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose $μ^2$LLM, a $\underline{\textbf{mu}}$ltiscale $\underline{\textbf{mu}}$ltimodal large language model for RRG tasks. The novel $μ^2$Tokenizer, as an intermediate layer, integrates multimodal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO), guided by GREEN-RedLlama. Experimental results on four large CT image–report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of fine-tuned $μ^2$LLMs for RRG tasks with limited data.
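The DPO step described above can be sketched with the standard DPO objective (Rafailov et al.); in this setting, "chosen" and "rejected" reports would be ranked by the GREEN-RedLlama evaluator. The function name, the `beta` temperature, and the use of sequence-level log-probabilities are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss on sequence-level log-probabilities.

    `logp_*` come from the policy model, `ref_logp_*` from a frozen
    reference model; `beta` controls deviation from the reference.
    The chosen/rejected labels are assumed to come from a report-quality
    judge such as GREEN-RedLlama.
    """
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    # Maximize the margin between preferred and dispreferred reports.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

The loss decreases as the policy assigns higher relative likelihood to the preferred report than the reference model does, which is what lets a non-differentiable judge guide training through preference pairs.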