🤖 AI Summary
Med-LVLMs suffer from hallucinations in clinical tasks due to insufficient visual representation learning and misaligned vision–language attention. To address this, we propose MEDALIGN, a lightweight alignment distillation framework that, for the first time, introduces a spatial-aware visual alignment loss and an attention-aware distillation loss, enabling targeted transfer of diagnosis-relevant visual knowledge from CLIP to Med-LVLMs. Our method jointly models visual token-level similarity and performs diagnosis-region-guided attention distillation, requiring no additional annotations or architectural modifications. Evaluated on medical report generation and visual question answering, MEDALIGN significantly improves generation accuracy, image-evidence consistency, and clinical plausibility, while enhancing output interpretability. This work establishes an efficient, annotation-free alignment paradigm for trustworthy medical multimodal large language models.
📝 Abstract
Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs.
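To make the two losses concrete, here is a minimal PyTorch sketch of how they could be realized. The tensor shapes, the cosine-similarity construction, the KL-divergence distances, and the function names (`spatial_alignment_loss`, `attention_distillation_loss`) are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of MEDALIGN-style distillation losses.
# Shapes, distance choices, and loss weighting below are assumptions.
import torch
import torch.nn.functional as F


def spatial_alignment_loss(student_tokens: torch.Tensor,
                           teacher_tokens: torch.Tensor,
                           tau: float = 0.1) -> torch.Tensor:
    """Match the student's visual token-level similarity structure
    to that of the CLIP teacher.

    student_tokens: (B, N, D_s) visual tokens from the Med-LVLM.
    teacher_tokens: (B, N, D_t) visual tokens from the domain CLIP encoder.
    Token counts are assumed to already match (e.g., after pooling).
    """
    s = F.normalize(student_tokens, dim=-1)
    t = F.normalize(teacher_tokens, dim=-1)
    # Pairwise cosine-similarity matrices, (B, N, N).
    sim_s = s @ s.transpose(1, 2) / tau
    sim_t = t @ t.transpose(1, 2) / tau
    # Treat each row as a distribution over tokens and match with KL.
    return F.kl_div(F.log_softmax(sim_s, dim=-1),
                    F.softmax(sim_t, dim=-1),
                    reduction="batchmean")


def attention_distillation_loss(student_attn: torch.Tensor,
                                teacher_relevance: torch.Tensor) -> torch.Tensor:
    """Pull the student's attention over visual tokens toward
    diagnosis-relevant regions derived from the teacher.

    student_attn:      (B, N) attention weights over N visual tokens.
    teacher_relevance: (B, N) unnormalized relevance scores (e.g., a
                       CLIP-derived saliency map over the same tokens).
    """
    target = F.softmax(teacher_relevance, dim=-1)
    return F.kl_div(torch.log(student_attn.clamp_min(1e-8)),
                    target, reduction="batchmean")


if __name__ == "__main__":
    B, N, Ds, Dt = 2, 64, 512, 768
    l_align = spatial_alignment_loss(torch.randn(B, N, Ds),
                                     torch.randn(B, N, Dt))
    l_attn = attention_distillation_loss(
        F.softmax(torch.randn(B, N), dim=-1), torch.randn(B, N))
    total = l_align + 0.5 * l_attn  # illustrative weighting
    print(total.item())
```

Because both losses operate only on tokens and attention maps that the models already produce, this kind of objective needs no extra annotations or architectural changes, consistent with the lightweight design the abstract describes.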