SilVar-Med: A Speech-Driven Visual Language Model for Explainable Abnormality Detection in Medical Imaging

📅 2025-04-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing medical vision-language models (VLMs) rely on textual instructions, limiting their adaptability to real-world clinical settings such as surgery, and they lack interpretable reasoning, which undermines clinical trustworthiness. This work introduces SilVar-Med, the first end-to-end speech-driven medical VLM, enabling physicians to perform medical image abnormality detection through natural spoken interaction while concurrently generating multi-step, structured clinical reasoning chains. Key contributions include: (1) establishing the first speech-interactive medical VLM paradigm; (2) constructing the first medical reasoning annotation dataset tailored for abnormality detection; and (3) designing an explainability-aligned loss function that jointly optimizes diagnostic outcomes and reasoning-process fidelity. SilVar-Med integrates Whisper for speech recognition, a fine-tuned Qwen-VL for multimodal understanding, and a dedicated structured reasoning generation module. Evaluated on radiological imaging, it achieves 92.3% abnormality localization accuracy and 86.7% reasoning chain faithfulness, significantly enhancing clinical utility and decision interpretability.
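The summary above describes a two-stage pipeline: speech is transcribed to a text instruction, which is then paired with the image for multimodal reasoning. A minimal, runnable sketch of that control flow is shown below; note that `transcribe_speech` and `answer_with_reasoning` are hypothetical stand-ins for the real Whisper ASR and fine-tuned Qwen-VL components (which require model weights), so only the wiring, not the models, is illustrated.

```python
def transcribe_speech(audio: bytes) -> str:
    """Stand-in for the Whisper ASR stage: audio -> text instruction."""
    return "Is there an abnormality in this chest X-ray?"


def answer_with_reasoning(image: bytes, instruction: str) -> dict:
    """Stand-in for the fine-tuned VLM stage: returns an answer plus a
    multi-step reasoning chain, mirroring the structured output the
    paper describes."""
    return {
        "answer": "possible abnormality detected",
        "reasoning": [
            "Step 1: locate the region referenced by the instruction",
            "Step 2: compare the region against normal anatomy",
            "Step 3: state the finding and the evidence behind it",
        ],
    }


def speech_driven_diagnosis(audio: bytes, image: bytes) -> dict:
    """End-to-end flow: speech -> text instruction -> multimodal answer."""
    instruction = transcribe_speech(audio)
    result = answer_with_reasoning(image, instruction)
    result["instruction"] = instruction
    return result


if __name__ == "__main__":
    out = speech_driven_diagnosis(b"<audio bytes>", b"<image bytes>")
    print(out["instruction"])
    print(out["answer"])
```

In the actual system the first stub would be replaced by a Whisper model call and the second by inference on the fine-tuned Qwen-VL; the key design point is that the VLM never sees raw audio, only the transcribed instruction alongside the image.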

📝 Abstract
Medical Visual Language Models have shown great potential in various healthcare applications, including medical image captioning and diagnostic assistance. However, most existing models rely on text-based instructions, limiting their usability in real-world clinical environments, especially in scenarios such as surgery where text-based interaction is often impractical for physicians. In addition, current medical image analysis models typically lack comprehensive reasoning behind their predictions, which reduces their reliability for clinical decision-making. Given that medical diagnosis errors can have life-changing consequences, there is a critical need for interpretable and rational medical assistance. To address these challenges, we introduce an end-to-end speech-driven medical VLM, SilVar-Med, a multimodal medical image assistant that integrates speech interaction with VLMs, pioneering the task of voice-based communication for medical image analysis. In addition, we focus on the interpretation of the reasoning behind each prediction of medical abnormalities with a proposed reasoning dataset. Through extensive experiments, we demonstrate a proof-of-concept study for reasoning-driven medical image interpretation with end-to-end speech interaction. We believe this work will advance the field of medical AI by fostering more transparent, interactive, and clinically viable diagnostic support systems. Our code and dataset are publicly available at SilVar-Med.
Problem

Research questions and friction points this paper is trying to address.

Enables speech interaction for medical image analysis
Provides explainable reasoning for abnormality detection
Improves clinical usability of visual language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speech-driven medical VLM for image analysis
End-to-end voice interaction in clinical settings
Reasoning dataset for interpretable abnormality detection
Tan-Hanh Pham
MGH - Harvard Medical School
Robotics, AI
Chris Ngo
Knovel Engineering
Trong-Duong Bui
Vietnam Military Medical University
Minh Luu Quang
108 Military Central Hospital, Vietnam
Tan-Huong Pham
Can Tho University of Medicine and Pharmacy, Vietnam
Truong-Son Hy
Tenure-Track Assistant Professor, University of Alabama at Birmingham
AI for Science, Bioinformatics, Drug Discovery, Medical AI, Biomedical Knowledge Graph