Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks

📅 2025-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) can be jailbroken by simple image perturbations such as Gaussian noise, largely because prevailing training paradigms do not model noise robustness. Method: The work identifies the absence of noise-augmented training as a root cause of these security gaps and introduces Robust-VLGuard, a multimodal safety dataset of aligned and misaligned image-text pairs, together with DiffPure-VLM, a defense framework that applies diffusion-based purification before the VLM. The approach combines noise-augmented safety fine-tuning with a diffusion step that converts optimization-based adversarial perturbations into Gaussian-like noise the fine-tuned model is trained to tolerate. Contribution/Results: Experiments demonstrate that DiffPure-VLM substantially reduces attack success rates across diverse optimization-based visual perturbation attacks and varying noise intensities, while preserving semantic understanding and general capability on standard benchmarks.
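To make the noise-augmented fine-tuning idea concrete, below is a minimal sketch, assuming PyTorch image tensors normalized to [0, 1]; the function name, sigma range, and augmentation probability are illustrative assumptions rather than the authors' released implementation. The point is simply that safety fine-tuning sees Gaussian-perturbed copies of the training images so the model's safety behavior survives noisy inputs.

```python
import torch

def add_gaussian_noise(image: torch.Tensor, sigma_max: float = 0.1) -> torch.Tensor:
    """Perturb a [0, 1]-normalized image tensor with random additive Gaussian noise."""
    sigma = torch.rand(1).item() * sigma_max           # sample a noise level per image
    noisy = image + sigma * torch.randn_like(image)    # additive Gaussian perturbation
    return noisy.clamp(0.0, 1.0)                       # keep pixel values in range

# During safety fine-tuning, each (image, instruction, response) sample could use
# add_gaussian_noise(image) with some probability instead of the clean image.
```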

📝 Abstract
Vision-Language Models (VLMs) extend the capabilities of Large Language Models (LLMs) by incorporating visual information, yet they remain vulnerable to jailbreak attacks, especially when processing noisy or corrupted images. Although existing VLMs adopt security measures during training to mitigate such attacks, vulnerabilities associated with noise-augmented visual inputs are overlooked. In this work, we identify that the absence of noise-augmented training causes critical security gaps: many VLMs are susceptible to even simple perturbations such as Gaussian noise. To address this challenge, we propose Robust-VLGuard, a multimodal safety dataset with aligned/misaligned image-text pairs, combined with noise-augmented fine-tuning that reduces attack success rates while preserving the functionality of the VLM. For stronger optimization-based visual perturbation attacks, we propose DiffPure-VLM, which leverages diffusion models to convert adversarial perturbations into Gaussian-like noise that can be defended against by VLMs with noise-augmented safety fine-tuning. Experimental results demonstrate that the distribution-shifting property of the diffusion model aligns well with our fine-tuned VLMs, significantly mitigating adversarial perturbations across varying intensities. The dataset and code are available at https://github.com/JarvisUSTC/DiffPure-RobustVLM.
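The purification step described in the abstract can be sketched as follows, under the assumption of a DDPM-style diffusion model exposing an `alphas_cumprod` schedule and a per-step denoiser `p_sample`; the timestep `t_star` and these interfaces are hypothetical placeholders, not the paper's exact configuration. A short forward diffusion partially noises the adversarial image, and the reverse process denoises it, shifting residual adversarial perturbations toward the Gaussian-like noise the fine-tuned VLM tolerates.

```python
import torch

@torch.no_grad()
def diffusion_purify(x_adv: torch.Tensor, diffusion, t_star: int = 100) -> torch.Tensor:
    """x_adv: adversarial image in [-1, 1]; diffusion: assumed DDPM-style model
    with an alphas_cumprod tensor and a p_sample(x, t) denoising step."""
    a_bar = diffusion.alphas_cumprod[t_star]
    # Forward diffusion: partially noise the adversarial input.
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * torch.randn_like(x_adv)
    # Reverse diffusion: denoise step by step back to t = 0.
    for t in reversed(range(t_star)):
        x_t = diffusion.p_sample(x_t, t)
    return x_t  # purified image, then passed to the noise-augmented fine-tuned VLM
```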
Problem

Research questions and friction points this paper is trying to address.

VLMs vulnerable to Gaussian noise attacks
Lack of noise-augmented training for security
Need defense against adversarial visual perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise-augmented fine-tuning for VLMs
Diffusion models for adversarial noise conversion
Multimodal safety dataset with aligned/misaligned image-text pairs