🤖 AI Summary
This work investigates membership inference attacks (MIAs) against pre-trained large language models (LLMs) under the label-only setting—where attackers observe only generated tokens and lack access to output logits. Existing label-only MIAs suffer severe performance degradation on pre-trained LLMs, despite remaining effective on fine-tuned models, suggesting intrinsic robustness mechanisms in pre-training. To address this gap, we conduct the first systematic analysis of the root causes of such failure and propose PETAL: the first label-only MIA framework that approximates token-level conditional probabilities via semantic similarity between generated tokens and candidate continuations, enabling accurate perplexity estimation. Evaluated on WikiMIA and MIMIR benchmarks across five mainstream open-source LLMs, PETAL significantly outperforms prior label-only methods—achieving up to a 37% improvement in mean average precision (mAP)—and matches the performance of state-of-the-art logit-based attacks.
📝 Abstract
Membership Inference Attacks (MIAs) aim to predict whether a data sample belongs to the model's training set. Although prior research has extensively explored MIAs against Large Language Models (LLMs), existing attacks typically require access to the complete output logits (i.e., *logit-based attacks*), which are usually unavailable in practice. In this paper, we study the vulnerability of pre-trained LLMs to MIAs in the *label-only setting*, where the adversary can only access generated tokens (text). We first reveal that existing label-only MIAs are largely ineffective against pre-trained LLMs, although they are highly effective in inferring the fine-tuning datasets of personalized LLMs. We trace their failure to two main causes: better generalization and overly coarse perturbation. Specifically, because of the extensive pre-training corpora and the fact that each sample is exposed only a few times, LLMs exhibit minimal robustness differences between members and non-members, which makes token-level perturbations too coarse to capture such differences. To alleviate these problems, we propose **PETAL**: a label-only membership inference attack based on **PE**r-**T**oken sem**A**ntic simi**L**arity. Specifically, PETAL leverages token-level semantic similarity to approximate output probabilities and subsequently calculates perplexity. It finally exposes membership based on the common assumption that members are 'better' memorized and have smaller perplexity. We conduct extensive experiments on the WikiMIA benchmark and the more challenging MIMIR benchmark. Empirically, PETAL outperforms extensions of existing label-only attacks designed for personalized LLMs and performs on par with advanced logit-based attacks across all metrics on five prevalent open-source LLMs.
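The core idea, as described in the abstract, is to substitute inaccessible token probabilities with semantic similarity scores and then aggregate them into a pseudo-perplexity. The sketch below illustrates this pipeline under simplifying assumptions: it is not the paper's implementation, and the function names (`petal_score`, `infer_membership`), the use of cosine similarity over token embeddings, and the clamping constant are all hypothetical choices for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def petal_score(target_embeddings, generated_embeddings):
    """Aggregate per-token semantic similarities into a pseudo-perplexity.

    Each target token is compared with the token the model actually
    generated at the same position; the similarity is treated as a proxy
    for the (unobservable) conditional probability of that token.
    Embeddings would come from some external text encoder (assumption).
    """
    sims = [cosine_similarity(t, g)
            for t, g in zip(target_embeddings, generated_embeddings)]
    # Treat similarities as pseudo-probabilities; clamp so log is defined.
    probs = [max(s, 1e-6) for s in sims]
    avg_neg_log = -sum(math.log(p) for p in probs) / len(probs)
    return math.exp(avg_neg_log)  # lower = tokens reproduced more faithfully

def infer_membership(score, threshold):
    """Members are assumed to be better memorized, i.e. lower pseudo-perplexity."""
    return score < threshold
```

If the generated tokens match the targets exactly (similarity 1 at every position), the pseudo-perplexity bottoms out at 1; any semantic drift pushes it upward, so a threshold calibrated on held-out data separates likely members from non-members.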