LLaDA-MedV: Exploring Large Language Diffusion Models for Biomedical Image Understanding

📅 2025-08-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior to this work, masked diffusion architectures had not been explored for biomedical vision-language modeling. Method: We propose LLaDA-MedV—the first large language diffusion model tailored to biomedical multimodal understanding—bringing the language diffusion paradigm to biomedical image interpretation. Built on a masked diffusion architecture, it combines visual instruction tuning, refined weight initialization, and optimized sampling-step scheduling to enable explicit control over response length, improving long-text generation quality and inference stability. Contribution/Results: On open-ended visual dialogue, LLaDA-MedV outperforms LLaVA-Med by 7.86%. It achieves new state-of-the-art accuracies of 84.93% on VQA-RAD, 92.31% on SLAKE, and 95.15% on PathVQA. This work establishes a novel paradigm and a strong baseline for biomedical multimodal reasoning.

📝 Abstract
Autoregressive models (ARMs) have long dominated the landscape of biomedical vision-language models (VLMs). Recently, masked diffusion models such as LLaDA have emerged as promising alternatives, yet their application in the biomedical domain remains largely underexplored. To bridge this gap, we introduce LLaDA-MedV, the first large language diffusion model tailored for biomedical image understanding through vision instruction tuning. LLaDA-MedV achieves relative performance gains of 7.855% over LLaVA-Med and 1.867% over LLaDA-V on the open-ended biomedical visual conversation task, and sets new state-of-the-art accuracy on the closed-form subset of three VQA benchmarks: 84.93% on VQA-RAD, 92.31% on SLAKE, and 95.15% on PathVQA. Furthermore, a detailed comparison with LLaVA-Med suggests that LLaDA-MedV is capable of generating reasonably longer responses by explicitly controlling response length, which can lead to more informative outputs. We also conduct an in-depth analysis of both the training and inference stages, highlighting the critical roles of initialization weight selection, fine-tuning strategies, and the interplay between sampling steps and response repetition. The code and model weights are released at https://github.com/LLM-VLM-GSL/LLaDA-MedV.
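The "explicit control over response length" mentioned in the abstract follows from how masked diffusion decoding works in general: the output is initialized as a fixed-length, fully masked sequence and denoised over a fixed number of sampling steps, committing high-confidence tokens at each step. The sketch below is a toy illustration of that general scheme, not the paper's actual implementation; `toy_predict`, the vocabulary, and the confidence scores are all hypothetical stand-ins for a real diffusion language model.

```python
import random

MASK = "<mask>"
VOCAB = ["a", "b", "c", "d"]  # toy vocabulary; a real model uses a tokenizer


def toy_predict(tokens):
    # Stand-in for the diffusion LM: returns a (token, confidence) guess for
    # every masked position. A real model would score the full vocabulary.
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK}


def diffusion_decode(resp_len, steps, seed=0):
    """Sketch of fixed-length masked-diffusion decoding.

    The response length is fixed up front (resp_len masked slots), so length
    is controlled explicitly rather than by emitting an EOS token as in
    autoregressive decoding. Each of `steps` iterations commits the most
    confident predictions and leaves the rest masked for later steps.
    """
    random.seed(seed)
    tokens = [MASK] * resp_len
    per_step = max(1, resp_len // steps)
    for _ in range(steps):
        preds = toy_predict(tokens)
        if not preds:
            break
        # Keep only the highest-confidence predictions this step.
        best = sorted(preds.items(), key=lambda kv: -kv[1][1])[:per_step]
        for pos, (tok, _conf) in best:
            tokens[pos] = tok
    # Fill any positions left masked by integer division in a final pass.
    for pos, (tok, _conf) in toy_predict(tokens).items():
        tokens[pos] = tok
    return tokens
```

With more sampling steps, fewer tokens are committed per step, which is the "interplay between sampling steps and response repetition" the abstract analyzes: too few steps force many simultaneous commitments, a known source of repetitive output in masked diffusion decoding.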
Problem

Research questions and friction points this paper is trying to address.

Exploring diffusion models for biomedical image understanding
Bridging gap in biomedical vision-language model applications
Improving accuracy in biomedical visual question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

First large language diffusion model for biomedical images
Vision instruction tuning enhances biomedical understanding
Explicit response length control improves output quality