🤖 AI Summary
This work addresses the misalignment between generated radiology reports and medical images—commonly caused by inaccurate visual grounding and factual inconsistencies—by proposing CURE, a novel framework that integrates error-aware dynamic curriculum learning with multi-task training. Without requiring additional data, CURE jointly optimizes phrase-level visual grounding, localized report generation, and anatomy-guided report synthesis. The approach significantly enhances vision–language alignment, achieving a +0.37 improvement in grounding accuracy (measured by IoU), a +0.188 gain in report quality (CXRFEScore), and an 18.6% reduction in hallucinations. These advances collectively strengthen anatomical consistency and clinical reliability in automated radiology reporting.
📝 Abstract
Medical vision-language models can automate the generation of radiology reports but struggle with accurate visual grounding and factual consistency. Existing models often misalign textual findings with visual evidence, leading to unreliable or weakly grounded predictions. We present CURE, an error-aware curriculum learning framework that improves grounding and report quality without any additional data. CURE fine-tunes an instruction-tuned multimodal model on phrase grounding, grounded report generation, and anatomy-grounded report generation using public datasets. The method dynamically adjusts sampling based on model performance, emphasizing harder samples to improve spatial and textual alignment. CURE improves grounding accuracy by +0.37 IoU, boosts report quality by +0.188 CXRFEScore, and reduces hallucinations by 18.6%. CURE is a data-efficient framework that enhances both grounding accuracy and report reliability. Code is available at https://github.com/PabloMessina/CURE and model weights at https://huggingface.co/pamessina/medgemma-4b-it-cure.
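The core idea of error-aware dynamic curriculum learning — drawing training samples in proportion to the model's recent error on them — can be sketched as follows. This is a minimal illustration under assumed details (the class name, the exponential-moving-average update, and the weight floor are hypothetical choices, not the paper's exact scheme):

```python
import random

class ErrorAwareCurriculumSampler:
    """Sketch of error-aware dynamic sampling: samples the model
    recently got wrong are drawn more often. Hypothetical weighting;
    CURE's actual scheme may differ in its details."""

    def __init__(self, num_samples, smoothing=0.9, floor=0.05):
        self.errors = [1.0] * num_samples  # start pessimistic: every sample is "hard"
        self.smoothing = smoothing         # EMA factor for per-sample error updates
        self.floor = floor                 # minimum weight so easy samples still appear

    def update(self, idx, error):
        # Track a running error per sample, e.g. error = 1 - IoU for
        # phrase grounding, or a factual-inconsistency score for reports.
        self.errors[idx] = (self.smoothing * self.errors[idx]
                            + (1 - self.smoothing) * error)

    def sample(self, rng=random):
        # Draw an index with probability proportional to its (floored) error,
        # so harder samples dominate the curriculum.
        weights = [max(e, self.floor) for e in self.errors]
        return rng.choices(range(len(self.errors)), weights=weights, k=1)[0]
```

In use, the trainer would call `update(idx, error)` after each step and `sample()` to pick the next training example, so the mix of phrase-grounding and report-generation samples shifts toward whatever the model currently handles worst.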