Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates membership inference attacks (MIAs) against pre-trained large language models (LLMs) under the label-only setting—where attackers observe only generated tokens and lack access to output logits. Existing label-only MIAs suffer severe performance degradation on pre-trained LLMs, despite remaining effective on fine-tuned models, suggesting intrinsic robustness mechanisms in pre-training. To address this gap, we conduct the first systematic analysis of the root causes of such failure and propose PETAL: the first label-only MIA framework that approximates token-level conditional probabilities via semantic similarity between generated tokens and candidate continuations, enabling accurate perplexity estimation. Evaluated on WikiMIA and MIMIR benchmarks across five mainstream open-source LLMs, PETAL significantly outperforms prior label-only methods—achieving up to a 37% improvement in mean average precision (mAP)—and matches the performance of state-of-the-art logit-based attacks.

📝 Abstract
Membership Inference Attacks (MIAs) aim to predict whether a data sample belongs to a model's training set. Although prior research has extensively explored MIAs against Large Language Models (LLMs), these attacks typically require access to the complete output logits (i.e., logits-based attacks), which are usually unavailable in practice. In this paper, we study the vulnerability of pre-trained LLMs to MIAs in the label-only setting, where the adversary can only access generated tokens (text). We first reveal that existing label-only MIAs have little effect when attacking pre-trained LLMs, although they are highly effective in inferring the fine-tuning datasets used for personalized LLMs. We find that their failure stems from two main causes: better generalization and overly coarse perturbation. Specifically, because pre-training corpora are extensive and each sample is seen only a few times, LLMs exhibit minimal robustness differences between members and non-members, which makes token-level perturbations too coarse to capture such differences. To alleviate these problems, we propose PETAL: a label-only membership inference attack based on PEr-Token semAntic simiLarity. Specifically, PETAL leverages token-level semantic similarity to approximate output probabilities and subsequently calculates the perplexity. It finally exposes membership based on the common assumption that members are 'better' memorized and thus have smaller perplexity. We conduct extensive experiments on the WikiMIA benchmark and the more challenging MIMIR benchmark. Empirically, PETAL performs better than extensions of existing label-only attacks against personalized LLMs and is even on par with other advanced logit-based attacks across all metrics on five prevalent open-source LLMs.
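The core idea described in the abstract can be sketched in a few lines: use the semantic similarity between the tokens the model actually generated and the candidate sample's ground-truth continuation tokens as a stand-in for per-token probabilities, aggregate into a pseudo-perplexity, and threshold it. This is a toy illustration under stated assumptions, not the authors' implementation; the function names and the cosine-to-probability mapping are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def pseudo_perplexity(generated_embs, reference_embs, eps=1e-8):
    """Aggregate per-token semantic similarities into a pseudo-perplexity.

    generated_embs: embeddings of the tokens the model emitted
                    (all a label-only adversary can observe).
    reference_embs: embeddings of the ground-truth continuation tokens
                    of the candidate sample.
    """
    log_probs = []
    for gen, ref in zip(generated_embs, reference_embs):
        # Map cosine similarity in [-1, 1] to a pseudo-probability in
        # (0, 1]; this mapping is an assumption, the paper's exact
        # approximation may differ.
        p = max((cosine_similarity(gen, ref) + 1.0) / 2.0, eps)
        log_probs.append(math.log(p))
    # Perplexity: exp of the mean negative log (pseudo-)probability.
    return math.exp(-sum(log_probs) / len(log_probs))

def is_member(score, threshold):
    # Members are assumed 'better' memorized, hence smaller perplexity.
    return score < threshold
```

When the generated tokens match the reference exactly (similarity 1 at every position), the pseudo-perplexity bottoms out at 1; the more the generation drifts semantically, the higher it climbs, which is what the thresholding exploits.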
Problem

Research questions and friction points this paper is trying to address.

Label-only membership inference attack
Vulnerability of pre-trained LLMs
Token-level semantic similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Label-only membership inference attack
Token-level semantic similarity
Perplexity-based membership exposure
Authors

Yu He
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Boheng Li
Nanyang Technological University
AI Security · Watermarking · Backdoor Attack · Copyright Protection

Liu Liu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Zhongjie Ba
Zhejiang University
IoT Security

Wei Dong
College of Computing and Data Science, Nanyang Technological University

Yiming Li
The State Key Laboratory of Blockchain and Data Security, Zhejiang University; College of Computing and Data Science, Nanyang Technological University

Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy · AI Security

Kui Ren
Professor and Dean of Computer Science, Zhejiang University; ACM/IEEE Fellow
Data Security & Privacy · AI Security · IoT & Vehicular Security

Chun Chen
The State Key Laboratory of Blockchain and Data Security, Zhejiang University