LEAD: Layer-wise Expert-aligned Decoding for Faithful Radiology Report Generation

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of hallucinations—clinically irrelevant or image-inconsistent content—in radiology report generation by large vision-language models. To mitigate this issue, the authors propose a layer-wise expert-aligned decoding mechanism that integrates multiple expert modules within each decoder layer to extract pathological features from medical images. A gating network dynamically fuses these expert representations to continuously align the language generation process with the visual input, without relying on external knowledge sources. Experimental results across several public datasets demonstrate that the proposed approach significantly improves clinical accuracy, effectively suppresses hallucinatory content, and maintains high-quality natural language generation.

📝 Abstract
Radiology Report Generation (RRG) aims to produce accurate and coherent diagnostic reports from medical images. Although large vision-language models (LVLMs) improve report fluency and accuracy, they exhibit hallucinations, generating plausible yet image-ungrounded pathological details. Existing methods primarily rely on external knowledge guidance to align generated text with visual information. However, these approaches often ignore the inherent decoding priors and vision-language alignment biases in pretrained models, and their reliance on constructed guidance limits robustness. In this paper, we propose Layer-wise Expert-aligned Decoding (LEAD), a novel method that inherently modifies the LVLM decoding trajectory. A multi-expert module is designed to extract distinct pathological features, which are integrated into each decoder layer via a gating mechanism. This layer-wise architecture enables the LLM to consult expert features at every inference step through a learned gating function, thereby dynamically rectifying decoding biases and steering generation toward factual consistency. Experiments on multiple public datasets demonstrate that LEAD yields effective improvements in clinical accuracy metrics and mitigates hallucinations while preserving high generation quality.
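The core mechanism described in the abstract (per-layer expert modules fused by a learned gate) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the class name, dimensions, and the simple linear experts are assumptions chosen to keep the example minimal.

```python
import torch
import torch.nn as nn


class ExpertAlignedLayer(nn.Module):
    """Illustrative sketch of a LEAD-style decoder-layer augmentation:
    several expert projections extract distinct features from the visual
    input, and a gating network (conditioned on the token hidden states)
    decides how much of each expert to mix back in at this layer."""

    def __init__(self, d_model: int, d_visual: int, n_experts: int):
        super().__init__()
        # One projection per expert; in the paper each expert is meant to
        # specialize on a different pathological aspect of the image.
        self.experts = nn.ModuleList(
            nn.Linear(d_visual, d_model) for _ in range(n_experts)
        )
        # Gating network: outputs a softmax-normalized weight per expert
        # for every token position, so alignment happens at each step.
        self.gate = nn.Linear(d_model, n_experts)

    def forward(self, hidden: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d_model); visual: (batch, d_visual)
        expert_feats = torch.stack(
            [expert(visual) for expert in self.experts], dim=1
        )  # (batch, n_experts, d_model)
        weights = torch.softmax(self.gate(hidden), dim=-1)  # (batch, seq, n_experts)
        fused = torch.einsum("bse,bed->bsd", weights, expert_feats)
        # Residual injection: expert-aligned features steer, not replace,
        # the decoder's hidden states.
        return hidden + fused
```

Applied inside every decoder layer, this gives the language model a chance to re-consult visual expert features at each generation step rather than only at the input, which is the mechanism the paper credits with rectifying decoding biases.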
Problem

Research questions and friction points this paper is trying to address.

Radiology Report Generation
Hallucination
Vision-Language Alignment
Factual Consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise Decoding
Expert-aligned
Hallucination Mitigation
Vision-Language Alignment
Radiology Report Generation
👥 Authors
Ruixiao Yang, Beijing Institute of Technology
Yuanhe Tian, University of Washington (Computational Linguistics, Natural Language Processing)
Xu Yang, Beijing Institute of Technology
Huiqi Li, Beijing Institute of Technology
Yan Song, USTC (Natural Language Processing, Computational Linguistics, Machine Learning)