Detecting Non-Membership in LLM Training Data via Rank Correlations

📅 2026-03-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work presents a systematic approach to non-membership detection in large language model (LLM) training data, a capability relevant to copyright enforcement, compliance auditing, and user trust. The proposed method, PRISM, operates under grey-box access, requiring only model logits, and determines whether a given dataset was excluded from training by computing the Spearman rank correlation between its normalized token log-probabilities under the target model and a reference model. Experiments across multiple datasets show that PRISM reliably identifies non-members without false positives, demonstrating its practical utility for verifying that specific data is absent from an LLM's training set.

📝 Abstract
As large language models (LLMs) are trained on increasingly vast and opaque text corpora, determining which data contributed to training has become essential for copyright enforcement, compliance auditing, and user trust. While prior work focuses on detecting whether a dataset was used in training (membership inference), the complementary problem -- verifying that a dataset was not used -- has received little attention. We address this gap by introducing PRISM, a test that detects dataset-level non-membership using only grey-box access to model logits. Our key insight is that two models that have not seen a dataset exhibit higher rank correlation in their normalized token log probabilities than when one model has been trained on that data. Using this observation, we construct a correlation-based test that detects non-membership. Empirically, PRISM reliably rules out membership in training data across all datasets tested while avoiding false positives, thus offering a framework for verifying that specific datasets were excluded from LLM training.
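The abstract's key insight, that two models which have not seen a dataset rank its tokens similarly, can be sketched in plain Python. The function names, the 0.9 decision threshold, and the tie-handling details below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a rank-correlation non-membership test.
# `prism_non_member` and the 0.9 threshold are hypothetical; the paper's
# exact normalization and decision rule may differ.

def rankdata(values):
    """Assign average ranks (1-based), splitting ties evenly."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def prism_non_member(target_logprobs, reference_logprobs, threshold=0.9):
    """Declare non-membership when the two models rank tokens similarly.

    Note: Spearman correlation is invariant to any monotone per-sequence
    normalization of the log-probabilities, so the normalization the
    abstract mentions presumably operates at the dataset level in the
    full method.
    """
    rho = spearman(target_logprobs, reference_logprobs)
    return rho >= threshold, rho
```

In this sketch, a high rank correlation between the target and reference models' token log-probabilities is taken as evidence that neither model was trained on the sequence, mirroring the abstract's observation; a trained target model would be expected to reorder tokens it has memorized, lowering the correlation.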
Problem

Research questions and friction points this paper is trying to address.

non-membership detection
large language models
training data verification
membership inference
data provenance
Innovation

Methods, ideas, or system contributions that make the work stand out.

non-membership detection
rank correlation
large language models
training data auditing
grey-box inference