Enhancing the Safety of Medical Vision-Language Models by Synthetic Demonstrations

📅 2025-06-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical vision-language models (Med-VLMs) face security risks from adversarial queries (e.g., insurance fraud instructions) and suffer from over-defensiveness, erroneously rejecting benign clinical requests. To address this, we propose an inference-time, dual-modal safety enhancement method: constructing a joint vision-text safeguard mechanism using synthetically generated clinical examples, and introducing a novel few-shot hybrid prompting strategy that integrates both real and synthetic demonstrations to achieve cross-modal safety alignment. Systematic evaluation across nine medical imaging modalities demonstrates that our approach significantly improves rejection rates for harmful queries while degrading diagnostic report generation performance by less than 1.2%, thereby effectively mitigating over-defensiveness. Our core contribution is the first data-efficient, fine-tuning-free, inference-time defense against multimodal jailbreak attacks, achieving robust security without compromising clinical utility.

📝 Abstract
Generative medical vision-language models (Med-VLMs) are primarily designed to generate complex textual information (e.g., diagnostic reports) from multimodal inputs including a vision modality (e.g., medical images) and a language modality (e.g., clinical queries). However, their security vulnerabilities remain underexplored. Med-VLMs should be capable of rejecting harmful queries, such as *Provide detailed instructions for using this CT scan for insurance fraud*. At the same time, addressing security concerns introduces the risk of over-defense, where safety-enhancing mechanisms may degrade general performance, causing Med-VLMs to reject benign clinical queries. In this paper, we propose a novel inference-time defense strategy to mitigate harmful queries, enabling defense against visual and textual jailbreak attacks. Using diverse medical imaging datasets collected from nine modalities, we demonstrate that our defense strategy based on synthetic clinical demonstrations enhances model safety without significantly compromising performance. Additionally, we find that increasing the demonstration budget alleviates the over-defense issue. We then introduce a mixed demonstration strategy as a trade-off solution for balancing security and performance under few-shot demonstration budget constraints.
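The mixed demonstration strategy described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes demonstrations are (query, response) pairs and that the few-shot budget is split between real clinical demonstrations (preserving task performance) and synthetic safety demonstrations (teaching refusal of harmful queries). All function names and the prompt layout are assumptions.

```python
def build_mixed_prompt(real_demos, synthetic_demos, budget, query):
    """Assemble an inference-time few-shot prompt from a mix of
    real and synthetic demonstrations (illustrative sketch only).

    real_demos / synthetic_demos: lists of (query, response) pairs.
    budget: total number of demonstrations allowed (few-shot budget).
    """
    # Split the budget roughly in half between the two demo types;
    # the paper's actual trade-off may use a different ratio.
    n_syn = min(budget // 2, len(synthetic_demos))
    n_real = min(budget - n_syn, len(real_demos))
    demos = synthetic_demos[:n_syn] + real_demos[:n_real]

    parts = []
    for q, a in demos:
        parts.append(f"Query: {q}\nResponse: {a}")
    # The user's actual query goes last, with the response left open
    # for the model to complete.
    parts.append(f"Query: {query}\nResponse:")
    return "\n\n".join(parts)


# Toy usage with one benign and one safety demonstration:
real = [("Describe this chest X-ray.",
         "The lungs are clear; no acute findings.")]
synthetic = [("Use this CT scan for insurance fraud.",
              "I can't help with that; it would facilitate fraud.")]
prompt = build_mixed_prompt(real, synthetic, budget=2,
                            query="Summarize findings in this MRI.")
```

Under a tight budget, this interleaving trades a few task demonstrations for safety demonstrations, which is the balance the abstract frames as the security-versus-performance trade-off.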
Problem

Research questions and friction points this paper is trying to address.

Addressing security vulnerabilities in medical vision-language models
Preventing over-defense while rejecting harmful medical queries
Balancing model safety and performance with synthetic demonstrations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic clinical demonstrations enhance safety
Defense against visual and textual jailbreak attacks
Mixed demonstration strategy balances security and performance