Beyond N-grams: A Hierarchical Reward Learning Framework for Clinically-Aware Medical Report Generation

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical report generation models often suffer from clinical hallucinations, leading to factual inaccuracies that undermine diagnostic reliability. To address this, we propose HiMed-RL, a hierarchical reinforcement learning framework featuring a human-inspired dynamic reward adjustment mechanism that progressively optimizes linguistic fluency, conceptual accuracy, and diagnostic consistency. Our approach integrates expert-knowledge-guided concept alignment, token-level reward modeling, semantic-level LLM-based validation, and an adaptive reward scheduling strategy, enabling fine-grained clinical quality control. Evaluated on both in-domain and cross-domain benchmarks, the HiMed-3B model achieves state-of-the-art performance; notably, its cross-domain metrics surpass those of the second-best baseline by 12.1%, significantly enhancing the clinical credibility and practical utility of generated reports.

📝 Abstract
Automatic medical report generation can greatly reduce the workload of doctors, but it is often unreliable for real-world deployment. Current methods can write formally fluent sentences but may be factually flawed, introducing serious medical errors known as clinical hallucinations, which make them untrustworthy for diagnosis. To bridge this gap, we introduce HiMed-RL, a Hierarchical Medical Reward Learning Framework designed to explicitly prioritize clinical quality. HiMed-RL moves beyond simple text matching by deconstructing reward learning into three synergistic levels: it first ensures linguistic fluency at the token level, then enforces factual grounding at the concept level by aligning key medical terms with expert knowledge, and finally assesses high-level diagnostic consistency at the semantic level using a specialized LLM verifier. This hierarchical reward is implemented via a Human-inspired Dynamic Reward Adjustment, a strategy that first teaches the model basic facts before progressing to more complex diagnostic reasoning. Experimentally, HiMed-3B achieves state-of-the-art performance on both in-domain and out-of-domain benchmarks, particularly on the latter, with an improvement of 12.1% over the second-best baseline. Our work provides a robust paradigm for generating reports that improve not only fluency but also fine-grained clinical quality.
Problem

Research questions and friction points this paper is trying to address.

Addresses clinical hallucinations in medical report generation
Ensures factual grounding of key medical terms
Assesses diagnostic consistency using hierarchical reward learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical reward learning with token, concept, semantic levels
Human-inspired dynamic adjustment from basic facts to complex reasoning
Specialized LLM verifier for high-level diagnostic consistency
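The hierarchical reward with dynamic adjustment described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction based only on the high-level description here: the function names, the linear weight schedule, and the specific coefficients are assumptions, not the paper's actual method.

```python
# Hypothetical sketch of HiMed-RL's hierarchical reward combination.
# The three per-level reward scores (token fluency, concept grounding,
# semantic/diagnostic consistency) are assumed to be precomputed scalars;
# the linear curriculum schedule below is an illustrative assumption.

def schedule_weights(step: int, total_steps: int) -> tuple[float, float, float]:
    """Shift emphasis from token-level fluency toward semantic consistency.

    Early in training the token-level reward dominates ("learn basic facts
    first"); later, concept- and semantic-level rewards ramp up ("progress
    to complex diagnostic reasoning"). Weights are normalized to sum to 1.
    """
    t = min(step / total_steps, 1.0)
    w_token = 1.0 - 0.8 * t      # fluency emphasized early
    w_concept = 0.5 + 0.3 * t    # factual grounding ramps up
    w_semantic = 0.2 + 0.8 * t   # diagnostic consistency ramps up most
    total = w_token + w_concept + w_semantic
    return w_token / total, w_concept / total, w_semantic / total


def hierarchical_reward(r_token: float, r_concept: float, r_semantic: float,
                        step: int, total_steps: int) -> float:
    """Combine the three reward levels with schedule-dependent weights."""
    w_t, w_c, w_s = schedule_weights(step, total_steps)
    return w_t * r_token + w_c * r_concept + w_s * r_semantic
```

In this sketch, the combined reward would then be fed to a standard RL fine-tuning loop (e.g. a policy-gradient update); how the paper actually schedules or mixes the levels is not specified here.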
Yuan Wang
Zhejiang University
Shujian Gao
Fudan University
Jiaxiang Liu
Zhejiang University
Multimodal Fusion, Medical Image Analysis
Songtao Jiang
Zhejiang University
Vision-Language Models, AI for Bioinformatics and Medical
Haoxiang Xia
Zhejiang University
Xiaotian Zhang
Zhejiang University
Zhaolu Kang
Peking University
Yemin Wang
Zhejiang University
Zuozhu Liu
Assistant Professor, Zhejiang University/University of Illinois Urbana-Champaign
deep learning, vision-language models, medical AI