🤖 AI Summary
Existing medical vision-language models (VLMs) rely on text-based instructions, limiting their adaptability to real-world clinical settings such as surgery, and rarely expose the reasoning behind their predictions, which undermines clinical trustworthiness. This work introduces SilVar-Med, an end-to-end speech-driven multimodal medical image assistant that lets physicians query medical images for abnormalities through natural spoken interaction while surfacing the reasoning behind each prediction. Key contributions include: (1) pioneering the task of voice-based communication for medical image analysis; (2) a proposed reasoning dataset focused on interpreting predictions of medical abnormalities; and (3) a proof-of-concept study demonstrating reasoning-driven medical image interpretation with end-to-end speech interaction. The code and dataset are publicly available.
📝 Abstract
Medical Visual Language Models (VLMs) have shown great potential in various healthcare applications, including medical image captioning and diagnostic assistance. However, most existing models rely on text-based instructions, which limits their usability in real-world clinical environments; in scenarios such as surgery, text-based interaction is often impractical for physicians. In addition, current medical image analysis models typically provide little reasoning behind their predictions, which reduces their reliability for clinical decision-making. Given that medical diagnostic errors can have life-changing consequences, there is a critical need for interpretable and well-reasoned medical assistance. To address these challenges, we introduce an end-to-end speech-driven medical VLM, SilVar-Med, a multimodal medical image assistant that integrates speech interaction with VLMs, pioneering the task of voice-based communication for medical image analysis. In addition, we focus on interpreting the reasoning behind each prediction of medical abnormalities with a proposed reasoning dataset. Through extensive experiments, we demonstrate a proof-of-concept study for reasoning-driven medical image interpretation with end-to-end speech interaction. We believe this work will advance the field of medical AI by fostering more transparent, interactive, and clinically viable diagnostic support systems. Our code and dataset are publicly available at SilVar-Med.
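To make the abstract's system concept concrete, here is a minimal sketch of the speech-driven flow it describes: a spoken instruction is transcribed to text, then an image and the transcript are passed to a vision-language model that returns an answer together with a reasoning chain. All function names and stage implementations below are illustrative stand-ins (the paper's actual models and interfaces are not reproduced here); a real system would plug in an ASR model and a medical VLM at the marked points.

```python
def transcribe(audio_waveform):
    """Stand-in for an ASR stage turning a spoken instruction into text."""
    # A real implementation would run a speech-recognition model here.
    return "Is there an abnormality in the left lung?"


def answer_with_reasoning(image, question):
    """Stand-in for a medical VLM returning an answer plus a reasoning chain."""
    # A real implementation would run multimodal inference; the structured
    # reasoning steps below only illustrate the intended output format.
    reasoning = [
        "Identify the image region referenced by the question.",
        "Compare that region against expected normal anatomy.",
        "State the finding and the evidence supporting it.",
    ]
    answer = "Possible opacity in the left lower lobe."
    return {"question": question, "answer": answer, "reasoning": reasoning}


def speech_driven_vqa(audio_waveform, image):
    """End-to-end flow: speech in, interpretable answer out."""
    question = transcribe(audio_waveform)
    return answer_with_reasoning(image, question)


# Hypothetical usage with placeholder inputs.
result = speech_driven_vqa(audio_waveform=None, image=None)
print(result["answer"])
for step in result["reasoning"]:
    print("-", step)
```

The point of the sketch is the interface, not the models: the reasoning chain travels with the answer, so a clinician can inspect why a prediction was made rather than receiving a bare label.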