Membership Inference Attacks Against Vision-Language Models

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies, for the first time, the membership inference attack (MIA) vulnerability of instruction-tuned vision-language models (VLMs), exposing the risk that sensitive information leaks from instruction-tuning datasets. The authors propose a temperature-sensitive MIA framework comprising four progressive attack variants, spanning white-box to highly constrained black-box settings. The method integrates temperature-scan analysis, multi-granularity confidence modeling, instruction-level sample-sensitivity evaluation, and cross-modal output-distribution statistics. Evaluated on LLaVA, the framework achieves an AUC above 0.8 using only five query samples, substantially outperforming conventional MIA approaches, and demonstrates severe privacy leakage at the instruction-data level. The study establishes the first reproducible, scalable evaluation paradigm for assessing privacy risks in VLMs, providing foundational guidance for data governance and privacy-preserving development of multimodal foundation models.

📝 Abstract
Vision-Language Models (VLMs), built on pre-trained vision encoders and large language models (LLMs), have shown exceptional multi-modal understanding and dialogue capabilities, positioning them as catalysts for the next technological revolution. However, while most VLM research focuses on enhancing multi-modal interaction, the risks of data misuse and leakage have been largely unexplored. This prompts the need for a comprehensive investigation of such risks in VLMs. In this paper, we conduct the first analysis of misuse and leakage detection in VLMs through the lens of membership inference attack (MIA). Specifically, we focus on the instruction-tuning data of VLMs, which is more likely to contain sensitive or unauthorized information. To address the limitations of existing MIA methods, we introduce a novel approach that infers membership based on a set of samples and their sensitivity to temperature, a unique parameter in VLMs. Based on this, we propose four membership inference methods, each tailored to a different level of background knowledge, ultimately arriving at the most challenging scenario. Our comprehensive evaluations show that these methods can accurately determine membership status, e.g., achieving an AUC greater than 0.8 when targeting a small set of only 5 samples on LLaVA.
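The core intuition, inferring set-level membership from how a model's confidence shifts across a temperature scan, can be sketched in a few lines. This is an illustrative toy under stated assumptions, not the paper's actual attack: the function names, the standard-deviation sensitivity measure, and the threshold value are all assumptions introduced here for illustration.

```python
import numpy as np

def temperature_sensitivity_score(confidences_by_temp):
    """Hypothetical set-level membership score.

    confidences_by_temp: shape (n_temps, n_samples), the target model's
    confidence on each candidate sample, queried at several softmax
    temperatures.

    Assumed intuition: samples seen during instruction tuning tend to
    keep high confidence as temperature rises, so their confidence
    varies less across the temperature scan than non-members'.
    """
    c = np.asarray(confidences_by_temp, dtype=float)
    per_sample_sensitivity = c.std(axis=0)  # variation across temperatures
    # Lower mean sensitivity over the small sample set -> higher
    # membership score for the whole set (hence the negation).
    return -per_sample_sensitivity.mean()

def infer_set_membership(confidences_by_temp, threshold=-0.1):
    """Decide membership for the whole sample set (threshold is a guess)."""
    return temperature_sensitivity_score(confidences_by_temp) > threshold

# Toy data: a "member-like" set stays confident across temperatures,
# a "non-member-like" set degrades quickly.
member_like = [[0.95, 0.90], [0.94, 0.89], [0.93, 0.88]]
non_member_like = [[0.90, 0.85], [0.50, 0.45], [0.10, 0.05]]
```

The set-level decision (aggregating over a handful of samples rather than scoring each one in isolation) mirrors the abstract's claim of attacking a small set of only 5 samples, but the aggregation rule shown here is a simplification.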
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Membership Inference Attacks
Data Misuse and Leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Model
Membership Inference Attack
Sensitivity Analysis
Yuke Hu
Zhejiang University
Data Privacy, Trustworthy LLM, Differential Privacy, Machine Unlearning
Zheng Li
Shandong University
Zhihao Liu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Yang Zhang
CISPA Helmholtz Center for Information Security
Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy, AI Security
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy, AI Security, IoT & Vehicular Security
Chun Chen
The State Key Laboratory of Blockchain and Data Security, Zhejiang University