$μ^2$Tokenizer: Differentiable Multi-Scale Multi-Modal Tokenizer for Radiology Report Generation

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two key challenges in automated radiology report generation (RRG), namely complex visual information extraction and the difficulty of objectively evaluating report quality, this paper proposes a multi-scale, multimodal tokenization framework that integrates a vision encoder and a text tokenizer within a large language model. The authors introduce GREEN-RedLlama-guided differentiable direct preference optimization (DPO), enabling fine-grained semantic alignment between generated reports and expert annotations. By combining contrastive learning with DPO, the method improves cross-modal semantic consistency and the accuracy of clinical phrasing, especially in few-shot settings. Extensive experiments on four benchmark CT image–report datasets demonstrate state-of-the-art performance, outperforming existing methods across all metrics and supporting the framework's generalizability and clinical applicability.

📝 Abstract
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and the provision of management advice. RRG is complicated by two key challenges: (1) the inherent complexity of extracting relevant information from imaging data under resource constraints, and (2) the difficulty of objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose $μ^2$LLM, a $\underline{\textbf{mu}}$ltiscale $\underline{\textbf{mu}}$ltimodal large language model for RRG tasks. The novel $μ^2$Tokenizer, as an intermediate layer, integrates multi-modal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO), guided by GREEN-RedLlama. Experimental results on four large CT image–report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of our fine-tuned $μ^2$LLMs on limited data for RRG tasks.
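The abstract's DPO step can be illustrated with the standard DPO objective (Rafailov et al.): given a preferred and a dispreferred report for the same scan (here, a pairing the paper obtains via GREEN-RedLlama scoring), the loss pushes the policy to raise the log-likelihood margin of the preferred report relative to a frozen reference model. The sketch below is a minimal, generic single-pair version; the function name, argument names, and `beta` value are illustrative assumptions, not the paper's implementation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_* are summed token log-probabilities of the preferred (chosen)
    and dispreferred (rejected) reports under the policy being tuned;
    ref_logp_* are the same quantities under the frozen reference model.
    beta scales how strongly the policy may deviate from the reference.
    """
    chosen_ratio = logp_chosen - ref_logp_chosen        # log pi/pi_ref, chosen
    rejected_ratio = logp_rejected - ref_logp_rejected  # log pi/pi_ref, rejected
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(margin): small when the policy already prefers the
    # chosen report more than the reference model does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When policy and reference agree exactly, the margin is zero and the loss equals log 2; increasing the policy's relative preference for the chosen report drives the loss toward zero.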
Problem

Research questions and friction points this paper is trying to address.

Extracting relevant information from imaging data efficiently
Evaluating discrepancies between model-generated and expert reports
Improving radiology report generation accuracy and quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable multi-scale multi-modal tokenizer
Direct preference optimization for quality enhancement
Fine-tuned multiscale multimodal LLMs for RRG
Siyou Li
School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
Pengyao Qin
School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
Huanan Wu
Guangdong University of Technology, Guangdong, China
Dong Nie
unc
Computational Neuroscience, Machine Learning, Large Models
Arun J. Thirunavukarasu
Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
Juntao Yu
Queen Mary University of London
Natural Language Processing, Artificial Intelligence
Le Zhang
William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University London, London, UK; School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK