Concept-Enhanced Multimodal RAG: Towards Interpretable and Accurate Radiology Report Generation

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the persistent challenges of limited interpretability and factual hallucinations in existing medical vision-language models for radiology report generation, where interpretability and accuracy are often treated as mutually exclusive. To bridge this gap, the authors propose CEMRAG, a unified framework that uniquely integrates concept-level interpretability with multimodal retrieval-augmented generation (RAG). By disentangling visual representations into clinically interpretable concepts and incorporating multimodal RAG to supply semantically rich contextual prompts, CEMRAG simultaneously enhances both factual accuracy and model transparency. Extensive experiments on the MIMIC-CXR and IU X-Ray datasets demonstrate that CEMRAG consistently outperforms conventional RAG and concept-only baselines across diverse vision-language architectures, achieving significant improvements in both clinical correctness and standard NLP metrics, thereby offering a modular pathway toward trustworthy clinical AI.

📝 Abstract
Radiology Report Generation (RRG) through Vision-Language Models (VLMs) promises to reduce documentation burden, improve reporting consistency, and accelerate clinical workflows. However, clinical adoption of these models remains limited by their lack of interpretability and their tendency to hallucinate findings misaligned with imaging evidence. Existing research typically treats interpretability and accuracy as separate objectives: concept-based explainability techniques focus primarily on transparency, while Retrieval-Augmented Generation (RAG) methods target factual grounding through external retrieval. We present Concept-Enhanced Multimodal RAG (CEMRAG), a unified framework that decomposes visual representations into interpretable clinical concepts and integrates them with multimodal RAG. This approach uses enriched contextual prompts for RRG, improving both interpretability and factual accuracy. Experiments on MIMIC-CXR and IU X-Ray across multiple VLM architectures, training regimes, and retrieval configurations demonstrate consistent improvements over both conventional RAG and concept-only baselines on clinical accuracy metrics and standard NLP measures. These results challenge the assumed trade-off between interpretability and performance, showing that transparent visual concepts can enhance rather than compromise diagnostic accuracy in medical VLMs. Our modular design decomposes interpretability into visual transparency and structured language model conditioning, providing a principled pathway toward clinically trustworthy AI-assisted radiology.
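The abstract describes a pipeline that scores interpretable clinical concepts from the image, retrieves similar prior reports, and assembles both into an enriched prompt for the VLM. The minimal sketch below illustrates that idea under stated assumptions; the function names, the dot-product similarity, and the prompt template are illustrative stand-ins, not the authors' actual CEMRAG implementation.

```python
# Illustrative sketch of concept-enhanced multimodal RAG prompt assembly.
# All names and the toy similarity measure are hypothetical, not the paper's API.

def extract_concepts(image_embedding, concept_vectors, names, top_k=3):
    # Score each clinical concept by (toy) dot-product similarity to the image
    # embedding; return the top-k (name, score) pairs, highest first.
    scores = {n: sum(a * b for a, b in zip(image_embedding, v))
              for n, v in zip(names, concept_vectors)}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

def retrieve_reports(image_embedding, corpus, top_k=2):
    # corpus: list of (report_embedding, report_text) pairs; rank by similarity.
    ranked = sorted(
        corpus,
        key=lambda item: -sum(a * b for a, b in zip(image_embedding, item[0])))
    return [text for _, text in ranked[:top_k]]

def build_prompt(concepts, reports):
    # Combine interpretable concept scores with retrieved context into one
    # enriched prompt for the report-generating VLM.
    concept_str = ", ".join(f"{n} ({s:.2f})" for n, s in concepts)
    context = "\n".join(f"- {r}" for r in reports)
    return (f"Detected clinical concepts: {concept_str}\n"
            f"Similar prior reports:\n{context}\n"
            "Generate a radiology report consistent with the evidence above.")

# Toy usage with 3-dimensional embeddings.
img = [0.9, 0.1, 0.3]
concepts = extract_concepts(
    img,
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    ["cardiomegaly", "effusion", "edema"],
    top_k=2)
prompt = build_prompt(
    concepts,
    retrieve_reports(img, [
        ([0.8, 0.2, 0.1], "Mild cardiomegaly, no effusion."),
        ([0.1, 0.9, 0.2], "Small left pleural effusion."),
    ]))
```

The concept scores double as an interpretability surface: a clinician can inspect which concepts drove the prompt, which is the transparency benefit the abstract attributes to disentangling visual features into clinical concepts.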
Problem

Research questions and friction points this paper is trying to address.

Radiology Report Generation
Interpretability
Hallucination
Vision-Language Models
Factual Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-Enhanced Multimodal RAG
Interpretable AI
Radiology Report Generation
Vision-Language Models
Retrieval-Augmented Generation