STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings

📅 2025-04-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of detecting "silent inclusion" (i.e., data contamination) of proprietary content in large language model (LLM) pretraining corpora, this paper proposes a lightweight, fine-tuning-free contamination detection framework based on verifiable watermarking via paraphrasing. Methodologically, it generates semantically faithful (BERTScore/ROUGE-constrained), watermark-embedded paraphrases of the target data, one released publicly and the rest kept private, and applies paired statistical tests on model likelihoods (e.g., the Wilcoxon signed-rank test) to identify statistically significant disparities between the public and private versions, enabling training-set membership inference without any model fine-tuning. Key contributions include: (1) high-sensitivity detection of contaminated instances that appear only once or constitute as little as 0.001% of the corpus; and (2) simultaneous preservation of semantic consistency and copyright traceability. The framework significantly outperforms existing dataset inference and contamination detection methods across four benchmarks, and empirical validation on academic abstracts and blog articles confirms accurate detection of such content in mainstream LLM training corpora.
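The paired likelihood test at the heart of this approach can be sketched in a few lines. Below is a minimal, stdlib-only illustration of a one-sided Wilcoxon signed-rank test (normal approximation) over per-document log-likelihoods; the function name, inputs, and approximation are illustrative assumptions, not the paper's actual code:

```python
import math

def membership_pvalue(public_ll, private_ll):
    """One-sided Wilcoxon signed-rank test (normal approximation).

    public_ll / private_ll: per-document log-likelihoods under the
    suspect model for the public (possibly trained-on) and private
    paraphrases. A small p-value means the model systematically
    prefers the public versions, i.e., evidence of membership.
    (Hypothetical sketch; the paper's statistical machinery may differ.)
    """
    diffs = [p - q for p, q in zip(public_ll, private_ll) if p != q]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # one-sided: P(Z >= z)
```

Because the test is paired per document, even a small, consistent likelihood gap between the public and private paraphrases accumulates into a significant result, which is what makes detection possible at very low contamination rates.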

📝 Abstract
Given how large parts of publicly available text are crawled to pretrain large language models (LLMs), data creators increasingly worry about the inclusion of their proprietary data for model training without attribution or licensing. Their concerns are also shared by benchmark curators whose test sets might be compromised. In this paper, we present STAMP, a framework for detecting dataset membership, i.e., determining the inclusion of a dataset in the pretraining corpora of LLMs. Given an original piece of content, our proposal involves first generating multiple rephrases, each embedding a watermark with a unique secret key. One version is to be released publicly, while others are to be kept private. Subsequently, creators can compare model likelihoods between public and private versions using paired statistical tests to prove membership. We show that our framework can successfully detect contamination across four benchmarks which appear only once in the training data and constitute less than 0.001% of the total tokens, outperforming several contamination detection and dataset inference baselines. We verify that STAMP preserves both the semantic meaning and the utility of the original data for comparing different models. We apply STAMP to two real-world scenarios to confirm the inclusion of paper abstracts and blog articles in the pretraining corpora.
Problem

Research questions and friction points this paper is trying to address.

Detecting dataset membership in LLM pretraining corpora
Proving content inclusion via watermarked rephrasings
Preserving semantic meaning while identifying data contamination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates watermarked rephrases for content
Compares model likelihoods for membership proof
Detects dataset inclusion with high sensitivity
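The watermarked-rephrase idea in the bullets above can be illustrated with a toy green-list scheme. This is a minimal sketch assuming a Kirchenbauer-style keyed vocabulary partition; the SHA-256 seeding, function names, and toy vocabulary are illustrative assumptions, not STAMP's implementation:

```python
import hashlib
import math

def is_green(prev_tok: str, tok: str, key: str, gamma: float = 0.5) -> bool:
    """Pseudorandomly place `tok` on the green list, seeded by the
    previous token and a secret key (illustrative hash partition)."""
    h = hashlib.sha256(f"{key}|{prev_tok}|{tok}".encode()).digest()
    return int.from_bytes(h[:8], "big") < gamma * 2**64

def watermark_z(tokens, key, gamma=0.5):
    """z-score of the green-token fraction in a token sequence; a large
    value indicates the text carries this key's watermark."""
    n = len(tokens) - 1
    green = sum(is_green(tokens[i - 1], tokens[i], key, gamma)
                for i in range(1, len(tokens)))
    return (green - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
```

A rephrasing model that samples preferentially from the green list produces text with a high z-score under the correct key, while text generated under a different key (or no watermark) scores near zero, which is what lets each paraphrase carry a verifiable, key-specific signal.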