Enhancing Medical Large Vision-Language Models via Alignment Distillation

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Med-LVLMs suffer from hallucinations in clinical tasks due to insufficient visual representation learning and misaligned vision–language attention. To address this, we propose MEDALIGN, a lightweight alignment distillation framework that, for the first time, introduces a spatial-aware visual alignment loss and an attention-aware distillation loss, enabling targeted transfer of diagnosis-relevant visual knowledge from CLIP to Med-LVLMs. Our method jointly models visual token-level similarity and performs diagnosis-region-guided attention distillation, requiring no additional annotations or architectural modifications. Evaluated on medical report generation and visual question answering, MEDALIGN significantly improves generation accuracy, image-evidence consistency, and clinical plausibility, while enhancing output interpretability. This work establishes an efficient, annotation-free alignment paradigm for trustworthy medical multimodal large language models.

📝 Abstract
Medical Large Vision-Language Models (Med-LVLMs) have shown promising results in clinical applications, but often suffer from hallucinated outputs due to misaligned visual understanding. In this work, we identify two fundamental limitations contributing to this issue: insufficient visual representation learning and poor visual attention alignment. To address these problems, we propose MEDALIGN, a simple, lightweight alignment distillation framework that transfers visual alignment knowledge from a domain-specific Contrastive Language-Image Pre-training (CLIP) model to Med-LVLMs. MEDALIGN introduces two distillation losses: a spatial-aware visual alignment loss based on visual token-level similarity structures, and an attention-aware distillation loss that guides attention toward diagnostically relevant regions. Extensive experiments on medical report generation and medical visual question answering (VQA) benchmarks show that MEDALIGN consistently improves both performance and interpretability, yielding more visually grounded outputs.
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucinated outputs in Medical Large Vision-Language Models
Improves visual representation learning and attention alignment
Proposes a lightweight distillation framework for medical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alignment distillation framework transfers visual knowledge
Spatial-aware visual alignment loss based on token similarity
Attention-aware distillation loss guides focus to diagnostic regions
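
The two losses above can be sketched in PyTorch. This is an illustrative reconstruction, not the paper's reference implementation: all tensor shapes, function names, and loss weights are assumptions. It treats the spatial-aware loss as matching pairwise visual-token similarity structures between student (Med-LVLM) and teacher (CLIP) features, and the attention-aware loss as a KL divergence pulling the student's attention over visual tokens toward the teacher's.

```python
# Hedged sketch of MEDALIGN-style distillation losses (shapes and
# weights are assumptions, not taken from the paper).
import torch
import torch.nn.functional as F

def spatial_alignment_loss(student_feats, teacher_feats):
    """Match token-level similarity structures.

    Both inputs: (batch, num_tokens, dim) visual token embeddings.
    Each model's pairwise cosine-similarity matrix over its visual
    tokens is computed, then the two matrices are compared with MSE.
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    sim_s = s @ s.transpose(-1, -2)   # (B, N, N) student similarities
    sim_t = t @ t.transpose(-1, -2)   # (B, N, N) teacher similarities
    return F.mse_loss(sim_s, sim_t)

def attention_distillation_loss(student_attn, teacher_attn, eps=1e-8):
    """KL divergence guiding the student's attention over visual tokens
    toward the teacher's (diagnosis-region-focused) distribution.

    Both inputs: (batch, num_tokens) attention weights summing to 1.
    """
    log_s = student_attn.clamp_min(eps).log()
    return F.kl_div(log_s, teacher_attn.clamp_min(eps),
                    reduction="batchmean")

# Hypothetical usage with random tensors; 0.5 is a placeholder weight.
B, N, D = 2, 49, 512
total_loss = (
    spatial_alignment_loss(torch.randn(B, N, D), torch.randn(B, N, D))
    + 0.5 * attention_distillation_loss(
        torch.softmax(torch.randn(B, N), dim=-1),
        torch.softmax(torch.randn(B, N), dim=-1))
)
```

Since both terms operate only on features and attention maps already produced by the models, this kind of objective needs no extra annotations or architectural changes, consistent with the framework's description.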