Guiding Medical Vision-Language Models with Explicit Visual Prompts: Framework Design and Comprehensive Exploration of Prompt Variations

📅 2025-01-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical vision-language models (VLMs) often overlook fine-grained anatomical features, such as lesions, resulting in clinically insufficient outputs. To address this, we propose MedVP, the first framework to incorporate explicit visual prompting into medical VLMs. MedVP employs a medical-entity-driven prompt generation and adaptation fine-tuning paradigm to steer model attention toward critical abnormal regions. It supports three prompt modalities (learnable prompts, mask-based prompts, and segmentation-map prompts), integrated with visual-prompt-guided instruction tuning and multimodal alignment optimization. Evaluated on multiple medical visual question answering (VQA) benchmarks, MedVP significantly outperforms state-of-the-art models, including LLaVA-Med, with substantial improvements in fine-grained pathological understanding and clinical answer accuracy. Our approach establishes a novel paradigm for enhancing both the interpretability and clinical adaptability of medical VLMs.
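The three prompt modalities named above can be illustrated with a minimal sketch. All shapes, names, and the overlay scheme here are assumptions for illustration, not the paper's implementation: a mask-based prompt highlights a box region, a segmentation-map prompt overlays a binary mask, and a learnable prompt is a set of trainable tokens prepended to the visual token sequence.

```python
import numpy as np

# Illustrative sketch (not the paper's code) of the three visual-prompt forms.

def mask_prompt(image: np.ndarray, box: tuple) -> np.ndarray:
    """Mask-based prompt: brighten pixels inside a rectangular region."""
    x0, y0, x1, y1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = np.clip(out[y0:y1, x0:x1] * 1.5, 0.0, 1.0)
    return out

def segmentation_prompt(image: np.ndarray, seg_map: np.ndarray) -> np.ndarray:
    """Segmentation-map prompt: blend a binary mask (e.g., a lesion mask)
    into the image as a soft overlay."""
    return np.clip(image + 0.3 * seg_map, 0.0, 1.0)

class LearnablePrompt:
    """Learnable prompt: trainable tokens prepended to the visual tokens,
    randomly initialized and updated during fine-tuning."""
    def __init__(self, num_tokens: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.tokens = rng.normal(scale=0.02, size=(num_tokens, dim))

    def prepend(self, visual_tokens: np.ndarray) -> np.ndarray:
        return np.concatenate([self.tokens, visual_tokens], axis=0)
```

The first two modalities modify the image before encoding; the learnable variant instead injects prompt information at the token level, which is why it must be trained jointly with the model.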

📝 Abstract
With recent advancements in vision-language models (VLMs) driven by large language models (LLMs), many researchers have focused on models composed of an image encoder, an image-to-language projection layer, and a text decoder, leading to the emergence of works like LLaVA-Med. However, these works primarily operate at the whole-image level, aligning general information from 2D medical images without attending to finer details. As a result, these models often provide irrelevant or non-clinically valuable information while missing critical details. Medical vision-language tasks differ significantly from those on general images, particularly in their focus on fine-grained details and exclusion of irrelevant content. General-domain VLMs tend to prioritize global information by design, compressing the entire image into a multi-token representation that is passed to the LLM decoder. Consequently, current VLMs lack the capability to restrict their attention to particular regions. To address this critical issue in the medical domain, we introduce MedVP, a visual prompt generation and fine-tuning framework that extracts medical entities, generates visual prompts, and adapts datasets for visual-prompt-guided fine-tuning. To the best of our knowledge, this is the first work to explicitly introduce visual prompts into medical VLMs, and we outperform recent state-of-the-art large models across multiple medical VQA datasets. Extensive experiments analyze the impact of different visual prompt forms and how they contribute to performance improvement. The results demonstrate both the effectiveness and clinical significance of our approach.
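The three-step pipeline the abstract describes (extract medical entities, generate visual prompts, adapt the dataset for visual-prompt-guided fine-tuning) can be sketched end to end. Everything below is a hypothetical stand-in: the lexicon replaces a medical NER model, and the fixed central box replaces a real grounding model.

```python
from dataclasses import dataclass

# Toy lexicon standing in for a medical named-entity recognizer (assumption).
ENTITY_LEXICON = {"lesion", "nodule", "opacity", "effusion"}

@dataclass
class VisualPrompt:
    entity: str
    box: tuple  # (x0, y0, x1, y1) in pixel coordinates

def extract_entities(question: str) -> list:
    """Step 1: naive lexicon match in place of medical entity extraction."""
    words = {w.strip("?.,").lower() for w in question.split()}
    return sorted(words & ENTITY_LEXICON)

def ground_entity(entity: str, image_size: tuple) -> VisualPrompt:
    """Step 2: a grounding model would localize the entity in the image;
    here a fixed central box is returned as a placeholder."""
    w, h = image_size
    return VisualPrompt(entity, (w // 4, h // 4, 3 * w // 4, 3 * h // 4))

def adapt_sample(question: str, image_size: tuple) -> dict:
    """Step 3: attach visual prompts to the VQA sample so fine-tuning can
    condition the model's attention on the highlighted regions."""
    prompts = [ground_entity(e, image_size) for e in extract_entities(question)]
    return {"question": question, "visual_prompts": prompts}
```

For example, `adapt_sample("Is there a lesion in the left lung?", (512, 512))` yields one visual prompt for the entity "lesion" with a central bounding box, which would then be rendered onto the image before instruction tuning.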
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Medical Imaging
Attention Mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

MedVP
Prompt Engineering
Medical Image Analysis