🤖 AI Summary
Medical report generation models often suffer from clinical hallucinations, factual inaccuracies that undermine diagnostic reliability. To address this, we propose HiMed-RL, a hierarchical reinforcement learning framework with a human-inspired dynamic reward adjustment mechanism that progressively optimizes linguistic fluency, conceptual accuracy, and diagnostic consistency. The framework integrates token-level reward modeling, expert-knowledge-guided concept alignment, semantic-level LLM-based validation, and an adaptive reward scheduling strategy, enabling fine-grained clinical quality control. Evaluated on both in-domain and cross-domain benchmarks, the HiMed-3B model achieves state-of-the-art performance; notably, its cross-domain metrics surpass those of the second-best baseline by 12.1%, significantly enhancing the clinical credibility and practical utility of generated reports.
📝 Abstract
Automatic medical report generation can greatly reduce the workload of doctors, but it is often unreliable for real-world deployment. Current methods can write formally fluent sentences that are nonetheless factually flawed, introducing serious medical errors known as clinical hallucinations, which make them untrustworthy for diagnosis. To bridge this gap, we introduce HiMed-RL, a Hierarchical Medical Reward Learning Framework designed to explicitly prioritize clinical quality. HiMed-RL moves beyond simple text matching by deconstructing reward learning into three synergistic levels: it first ensures linguistic fluency at the token level, then enforces factual grounding at the concept level by aligning key medical terms with expert knowledge, and finally assesses high-level diagnostic consistency at the semantic level using a specialized LLM verifier. This hierarchical reward is implemented via a Human-inspired Dynamic Reward Adjustment, a strategy that first teaches the model basic facts before progressing to more complex diagnostic reasoning. Experimentally, HiMed-3B achieves state-of-the-art performance on both in-domain and out-of-domain benchmarks, particularly on the latter, with an improvement of 12.1% over the second-best baseline. Our work provides a robust paradigm for generating reports that improve not only fluency but also fine-grained clinical quality.
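The abstract does not specify how the three reward levels are combined or scheduled, so the following is a minimal illustrative sketch, not the paper's implementation. It assumes the dynamic adjustment is a curriculum over scalar weights: early in training the token-level (fluency) reward dominates, and emphasis gradually shifts toward the concept-level and then the semantic-level (diagnostic) rewards. All function names (`scheduled_weights`, `himed_reward`) and the specific weight schedules are hypothetical.

```python
def scheduled_weights(step: int, total_steps: int) -> tuple[float, float, float]:
    """Hypothetical curriculum over the three reward levels.

    Returns normalized weights (token, concept, semantic) that shift
    emphasis from fluency toward diagnostic consistency as training
    progresses. The exact schedule is an illustrative assumption.
    """
    p = step / total_steps  # training progress in [0, 1]
    w_token = max(0.2, 1.0 - p)            # fluency dominates early
    w_concept = min(1.0, 2.0 * p) * (1.0 - 0.5 * p)  # peaks mid-training
    w_semantic = p ** 2                    # diagnostic reward ramps up late
    total = w_token + w_concept + w_semantic
    return w_token / total, w_concept / total, w_semantic / total


def himed_reward(r_token: float, r_concept: float, r_semantic: float,
                 step: int, total_steps: int) -> float:
    """Combine the three level-wise rewards under the current schedule."""
    wt, wc, ws = scheduled_weights(step, total_steps)
    return wt * r_token + wc * r_concept + ws * r_semantic
```

In this sketch, a policy trained early on is rewarded mainly for fluent text, while the same concept and semantic scores contribute more to the total reward as training advances, mirroring the "basic facts before complex reasoning" progression described above.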