🤖 AI Summary
This work addresses privacy leakage risks in large vision-language models (LVLMs) by proposing an image membership inference attack that determines whether a target image was included in the model's training set. The core method introduces, for the first time, a corruption-inspired membership inference paradigm: it exploits the differing sensitivity of LVLM embeddings to image corruption for member versus non-member images, enabling a lightweight and efficient attack framework. The approach operates in both white-box (leveraging visual embedding similarity) and black-box (utilizing output-text embedding similarity) settings, ensuring broad practical applicability. Extensive evaluation across multiple state-of-the-art LVLMs and benchmark datasets demonstrates strong performance: up to 89.7% accuracy in the white-box setting and 76.3% in the black-box setting, substantially outperforming existing baselines. This work advances the understanding of LVLM privacy vulnerabilities and provides a scalable inference technique grounded in image-corruption analysis.
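The white-box side of the attack described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the Gaussian-noise corruption, the `embed_fn` interface standing in for the LVLM's vision encoder, and the decision threshold are all assumptions made for the example.

```python
import numpy as np

def corrupt(image, noise_std=0.1, rng=None):
    # One possible image corruption: additive Gaussian noise, clipped to [0, 1].
    # The paper may use other corruptions (e.g., blur, compression); this choice
    # is illustrative.
    rng = rng or np.random.default_rng(0)
    return np.clip(image + rng.normal(0.0, noise_std, image.shape), 0.0, 1.0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def whitebox_membership_score(image, embed_fn, noise_std=0.1):
    # embed_fn maps an image to its visual embedding (assumed access to the
    # target LVLM's vision part). Member images are expected to yield higher
    # similarity between original and corrupted embeddings.
    e_orig = embed_fn(image)
    e_corr = embed_fn(corrupt(image, noise_std))
    return cosine_similarity(e_orig, e_corr)

def infer_membership(score, threshold):
    # Threshold the similarity score; the threshold would be calibrated
    # empirically in a real attack.
    return score >= threshold
```

In practice `embed_fn` would wrap the target LVLM's vision encoder; any fixed feature extractor can be substituted to exercise the scoring logic.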
📝 Abstract
Large vision-language models (LVLMs) have demonstrated outstanding performance in many downstream tasks. However, LVLMs are trained on large-scale datasets, which can pose privacy risks if training images contain sensitive information. Therefore, it is important to detect whether an image was used to train an LVLM. Recent studies have investigated membership inference attacks (MIAs) against LVLMs, including detecting image-text pairs and single-modality content. In this work, we focus on detecting whether a target image was used to train the target LVLM. We design simple yet effective Image Corruption-Inspired Membership Inference Attacks (ICIMIA) against LVLMs, which are inspired by LVLMs' differing sensitivity to image corruption for member versus non-member images. We first design an MIA under the white-box setting, where we can obtain the embeddings of the image through the vision part of the target LVLM. The attack is based on the embedding similarity between the image and its corrupted version. We further explore a more practical scenario where we have no knowledge of the target LVLM and can only query it with an image and a question. We then conduct the attack by utilizing the similarity of the output text embeddings. Experiments on existing datasets validate the effectiveness of our proposed attack methods under these two settings.
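The black-box variant described in the abstract, where only query access to the target LVLM is available, can be sketched as follows. The `query_fn` interface, the character-trigram text embedding (a stand-in for a real sentence encoder), and the single-question protocol are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def text_embedding(text, dim=64):
    # Hypothetical stand-in for a sentence encoder: hash character trigrams
    # into a fixed-size unit vector. A real attack would use a proper text
    # embedding model.
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n > 0 else vec

def blackbox_membership_score(query_fn, image, corrupted_image, question):
    # query_fn(image, question) -> answer text from the target LVLM
    # (assumed black-box API). Member images are expected to yield more
    # similar answers for the original and corrupted inputs.
    ans_orig = query_fn(image, question)
    ans_corr = query_fn(corrupted_image, question)
    return float(np.dot(text_embedding(ans_orig), text_embedding(ans_corr)))
```

As in the white-box case, the resulting similarity score would be thresholded, with the threshold calibrated on held-out member and non-member examples.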