Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding

📅 2024-09-05
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
📄 PDF
🤖 AI Summary
To address privacy-leakage risks from membership inference (MI) attacks against the pre-training data of large language models (LLMs), this paper proposes a robust MI method based on contrastive decoding. The core contribution is identifying and modeling the subtle yet discriminative distributional shifts that member and non-member contexts induce in a target text's likelihood; a contrastive decoding scheme then amplifies these shifts, coupled with a probability-ratio-based membership score. The method remains effective under semantics-preserving perturbations, including synonym substitution, token deletion, and paraphrasing. Evaluated on the WikiMIA benchmark, it achieves state-of-the-art performance, outperforming prior methods in accuracy by a substantial margin, and offers a new approach to LLM data provenance and privacy protection.

📝 Abstract
The training data in large language models is key to their success, but it also presents privacy and security risks, as it may contain sensitive information. Detecting pre-training data is crucial for mitigating these concerns. Existing methods typically analyze target text in isolation or solely with non-member contexts, overlooking potential insights from simultaneously considering both member and non-member contexts. While previous work suggested that member contexts provide little information due to the minor distributional shift they induce, our analysis reveals that these subtle shifts can be effectively leveraged when contrasted with non-member contexts. In this paper, we propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts through contrastive decoding, amplifying subtle differences to enhance membership inference. Extensive empirical evaluations demonstrate that Con-ReCall achieves state-of-the-art performance on the WikiMIA benchmark and is robust against various text manipulation techniques.
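The abstract's core mechanism, contrasting the target text's likelihood under a member-context prefix against its likelihood under a non-member-context prefix, can be sketched as below. This is a minimal illustration, not the paper's implementation: the interface `model_logprob_fn`, the whitespace tokenization, and the scoring rule `LL(x | non-member prefix) - gamma * LL(x | member prefix)` are all assumptions standing in for the actual language model and the paper's exact formula.

```python
def sequence_logprob(model_logprob_fn, text, prefix=""):
    """Average per-token log-probability of `text` conditioned on `prefix`.

    `model_logprob_fn(context, token)` is a hypothetical stand-in for a
    language model's next-token log-probability; a real implementation
    would score subword tokens with an actual LLM.
    """
    ctx = prefix
    tokens = text.split()  # crude whitespace tokenization for illustration
    total = 0.0
    for tok in tokens:
        total += model_logprob_fn(ctx, tok)
        ctx = (ctx + " " + tok).strip()
    return total / max(len(tokens), 1)

def con_recall_score(model_logprob_fn, target,
                     member_prefix, nonmember_prefix, gamma=1.0):
    """Contrastive membership score (illustrative sketch).

    Contrasts the target's likelihood under a non-member context against
    its (scaled) likelihood under a member context, amplifying the
    asymmetric distributional shifts the abstract describes. The exact
    direction and weighting used in the paper may differ.
    """
    ll_nonmember = sequence_logprob(model_logprob_fn, target, nonmember_prefix)
    ll_member = sequence_logprob(model_logprob_fn, target, member_prefix)
    return ll_nonmember - gamma * ll_member
```

In use, `member_prefix` would be built from texts known to be in the training set and `nonmember_prefix` from texts known not to be; the score is then thresholded to decide membership for the target.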
Problem

Research questions and friction points this paper is trying to address.

Privacy
Security
Language Models

Innovation

Methods, ideas, or system contributions that make the work stand out.

Con-ReCall
Contrastive Decoding
Pretrained Data Identification
Cheng Wang
National University of Singapore

Yiwei Wang
University of California, Merced

Bryan Hooi
National University of Singapore
Machine Learning, Natural Language Processing, Graphs, Trustworthy AI

Yujun Cai
NTU → Meta → Lecturer (Assistant Professor) @ UQ
Multi-Modal Perception, Vision-Language Models

Nanyun Peng
University of California, Los Angeles

Kai-Wei Chang
University of California, Los Angeles