AI Summary
This work proposes a medical-domain-specific multimodal foundation model to address key limitations of existing multimodal large language models in healthcare, including insufficient domain coverage, challenges in cross-modal alignment, and lack of interpretable visual grounding. The approach integrates heterogeneous visual encoders with a medical-language backbone through cross-modal alignment, followed by multi-task instruction tuning encompassing visual question answering (VQA), report generation, retrieval, and localization. A novel reinforcement learning mechanism is introduced, combining bounding box GIoU optimization with factual consistency verification to enhance spatial reasoning and disease localization. Experimental results demonstrate significant improvements: a 13.7% gain in VQA accuracy, a 6.9% increase in text-based QA performance, markedly improved clinical fidelity in generated reports, and a 40.4% relative improvement in bounding box IoU over the baseline, underscoring the model's superior cross-modal generalization capabilities.
Abstract
Multimodal large language models (MLLMs) have rapidly advanced, yet their adoption in medicine remains limited by gaps in domain coverage, modality alignment, and grounded reasoning. In this work, we introduce MedMO, a medical foundation model built upon a generalized MLLM architecture and trained exclusively on large-scale, domain-specific data. MedMO follows a multi-stage training recipe: (i) cross-modal pretraining to align heterogeneous visual encoders with a medical language backbone; (ii) instruction tuning on multi-task supervision that spans captioning, VQA, report generation, retrieval, and grounded disease localization with bounding boxes; and (iii) reinforcement learning with verifiable rewards that combine factuality checks with a box-level GIoU reward to strengthen spatial grounding and step-by-step reasoning in complex clinical scenarios. MedMO consistently outperforms strong open-source medical MLLMs across multiple modalities and tasks. On VQA benchmarks, MedMO achieves an average accuracy improvement of +13.7% over the baseline and performs within 1.9% of the SOTA Fleming-VL. For text-based QA, it attains +6.9% over the baseline and +14.5% over Fleming-VL. In medical report generation, MedMO delivers significant gains in both semantic and clinical accuracy. Moreover, it exhibits strong grounding capability, achieving an IoU improvement of +40.4% over the baseline and +37.0% over Fleming-VL, underscoring its robust spatial reasoning and localization performance. Evaluations across radiology, ophthalmology, and pathology-microscopy confirm MedMO's broad cross-modality generalization. We release two versions of MedMO: 4B and 8B. The project page is available at https://genmilab.github.io/MedMO-Page