Doctor Sun: A Bilingual Multimodal Large Language Model for Biomedical AI

📅 2025-07-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current medical multimodal large language models (MLLMs) are typically built on generic LLM backbones with weak vision-language alignment, which hinders their ability to comprehend complex medical concepts and to model fine-grained cross-modal relationships, especially when medical training data are scarce. To address this, the authors propose Doctor Sun, an open-source bilingual medical MLLM that couples a pre-trained vision encoder with a medical LLM. Training proceeds in two stages on diverse medical datasets: (1) vision-language feature alignment, followed by (2) end-to-end instruction tuning. Alongside the model, the authors release SunMed-VL, a wide-ranging bilingual medical multimodal dataset, together with all associated models, code, and resources.

📝 Abstract
Large multimodal models (LMMs) have demonstrated significant potential in providing innovative solutions for various biomedical tasks, including pathology analysis, radiology report generation, and biomedical assistance. However, existing multimodal biomedical AI systems are typically built on general-purpose foundation LLMs, which hinders the understanding of intricate medical concepts given limited medical training data. Moreover, recent LLaVA-derived medical LMMs struggle to effectively capture the intricate relationships between texts and images. Therefore, we introduce Doctor Sun, a large multimodal generative model specialized in medicine, developed to encode, integrate, and interpret diverse biomedical data modalities such as text and images. In particular, Doctor Sun integrates a pre-trained vision encoder with a medical LLM and conducts two-stage training on various medical datasets, focusing on feature alignment and instruction tuning. Moreover, we release SunMed-VL, a wide-ranging bilingual medical multimodal dataset, along with all associated models, code, and resources, to freely support the advancement of biomedical multimodal research.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited medical training data in multimodal biomedical AI
Improves text-image relationship understanding in medical LMMs
Integrates and interprets diverse biomedical data modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates pre-trained vision encoder with medical LLM
Uses two-stage training for feature alignment and tuning
Releases bilingual medical multimodal dataset and resources
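The two-stage paradigm above can be sketched as a training schedule over the model's three components. Note the abstract only states that stage one performs feature alignment and stage two performs instruction tuning; which components are frozen at each stage is an assumption here, following the common LLaVA-style recipe (stage 1 trains only the vision-language projector; stage 2 tunes the projector and the LLM end to end). All class and attribute names are illustrative, not from the paper.

```python
# Hypothetical sketch of a two-stage training schedule in the style described
# by the abstract. The freeze/unfreeze choices per stage are assumptions
# (LLaVA-style), not confirmed details of Doctor Sun.
from dataclasses import dataclass, field


@dataclass
class Component:
    """One trainable-or-frozen piece of the pipeline."""
    name: str
    trainable: bool = False


@dataclass
class DoctorSunPipeline:
    vision_encoder: Component = field(
        default_factory=lambda: Component("vision_encoder"))
    projector: Component = field(
        default_factory=lambda: Component("projector"))
    medical_llm: Component = field(
        default_factory=lambda: Component("medical_llm"))

    def stage1_feature_alignment(self):
        """Stage 1 (assumed): update only the vision-language projector so
        image features are mapped into the medical LLM's embedding space."""
        self.vision_encoder.trainable = False
        self.projector.trainable = True
        self.medical_llm.trainable = False
        return self.trainable_components()

    def stage2_instruction_tuning(self):
        """Stage 2 (assumed): fine-tune projector and LLM end to end on the
        bilingual medical instruction data."""
        self.vision_encoder.trainable = False
        self.projector.trainable = True
        self.medical_llm.trainable = True
        return self.trainable_components()

    def trainable_components(self):
        parts = (self.vision_encoder, self.projector, self.medical_llm)
        return [c.name for c in parts if c.trainable]


if __name__ == "__main__":
    pipeline = DoctorSunPipeline()
    print(pipeline.stage1_feature_alignment())   # ['projector']
    print(pipeline.stage2_instruction_tuning())  # ['projector', 'medical_llm']
```

The point of the schedule is that the cheap alignment stage adapts only a small projection module, while the expensive instruction-tuning stage unlocks the language model once visual features already land in its embedding space.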
Dong Xue
Associate Professor of Automation, East China University of Science and Technology
multi-agent systems · complex networks · distributed control and optimization · opinion dynamics in social networks · power systems
Ziyao Shao
Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
Zhaoyang Duan
Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China
Fangzhou Liu
Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin 150001, China
Bing Li
Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China
Zhongheng Zhang
Provincial Key Laboratory of Precise Diagnosis Treatment of Abdominal Infection, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310000, China; School of Medicine, Shaoxing University, Shaoxing 311800, China