As If We've Met Before: LLMs Exhibit Certainty in Recognizing Seen Files

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of detecting copyright-infringing content in large language model (LLM) training data. We propose COPYCHECK, an unsupervised framework that, for the first time, leverages the systematic difference in generation uncertainty between *seen* (in-training) and *unseen* content, termed "overconfidence bias," to perform membership inference without manual threshold tuning or labeled data. Methodologically, COPYCHECK integrates document segmentation, uncertainty signal modeling, and uncertainty-guided clustering to enable fine-grained identification of copyrighted material. Evaluated on LLaMA-7B and LLaMA2-7B, it achieves average balanced accuracies of 90.1% and 91.6%, respectively, a relative improvement of over 90% against the best baseline, reaching up to 93.8% balanced accuracy. Strong generalization is further validated on GPT-J-6B. COPYCHECK significantly enhances the transparency and auditability of LLM training data with respect to copyright compliance.

📝 Abstract
The remarkable language ability of Large Language Models (LLMs) stems from extensive training on vast datasets, often including copyrighted material, which raises serious concerns about unauthorized use. While Membership Inference Attacks (MIAs) offer potential solutions for detecting such violations, existing approaches face critical limitations and challenges due to LLMs' inherent overconfidence, limited access to ground truth training data, and reliance on empirically determined thresholds. We present COPYCHECK, a novel framework that leverages uncertainty signals to detect whether copyrighted content was used in LLM training sets. Our method turns LLM overconfidence from a limitation into an asset by capturing uncertainty patterns that reliably distinguish between "seen" (training data) and "unseen" (non-training data) content. COPYCHECK further implements a two-fold strategy: (1) strategic segmentation of files into smaller snippets to reduce dependence on large-scale training data, and (2) uncertainty-guided unsupervised clustering to eliminate the need for empirically tuned thresholds. Experiment results show that COPYCHECK achieves an average balanced accuracy of 90.1% on LLaMA 7B and 91.6% on LLaMA2 7B in detecting seen files. Compared to the SOTA baseline, COPYCHECK achieves over 90% relative improvement, reaching up to 93.8% balanced accuracy. It further exhibits strong generalizability across architectures, maintaining high performance on GPT-J 6B. This work presents the first application of uncertainty for copyright detection in LLMs, offering practical tools for training data transparency.
Problem

Research questions and friction points this paper is trying to address.

Detecting unauthorized copyrighted content in LLM training datasets
Overcoming limitations of existing membership inference attack methods
Transforming LLM overconfidence into reliable copyright detection signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages uncertainty signals to detect copyright use
Segments files into snippets to reduce data dependence
Uses unsupervised clustering to eliminate threshold tuning
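The segmentation and threshold-free clustering steps above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the snippet length, the stubbed uncertainty scores, and the simple 1-D two-means routine are all assumptions; in COPYCHECK the scores would come from the LLM itself (e.g., some token-level uncertainty signal).

```python
# Illustrative sketch (assumed names, not the paper's code): segment a
# file into snippets, then cluster snippet uncertainty scores into two
# groups without a hand-tuned threshold. Following the paper's
# "overconfidence bias" intuition, the low-uncertainty cluster is
# flagged as "seen" (likely in the training data).

def segment(text, snippet_len=50):
    """Split a file into fixed-length word snippets."""
    words = text.split()
    return [" ".join(words[i:i + snippet_len])
            for i in range(0, len(words), snippet_len)]

def two_means_seen_labels(scores, iters=20):
    """1-D two-means (Lloyd's algorithm) over uncertainty scores.
    Returns True for scores in the lower-uncertainty ("seen") cluster."""
    c = [min(scores), max(scores)]          # initialize centers at extremes
    for _ in range(iters):
        groups = ([], [])
        for s in scores:
            # assign each score to its nearest center
            groups[abs(s - c[0]) > abs(s - c[1])].append(s)
        # recompute centers; keep the old center if a group is empty
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    lo, hi = min(c), max(c)
    return [abs(s - lo) <= abs(s - hi) for s in scores]

# Example: three confidently generated snippets vs. three uncertain ones.
labels = two_means_seen_labels([0.5, 0.6, 0.55, 2.0, 2.1, 1.9])
print(labels)  # the first three, low-uncertainty snippets are flagged as seen
```

The point of the clustering step is that the seen/unseen boundary is derived from the score distribution itself, so no per-model threshold needs to be tuned empirically.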
Haodong Li
UC San Diego. Prev: HKUST, ZJU, Tencent.
3DV, Generative Models, Agents
Jingqi Zhang
National University of Singapore
Xiao Cheng
Macquarie University
Peihua Mai
National University of Singapore
privacy computing
Haoyu Wang
Huazhong University of Science and Technology
Yang Pan
National University of Singapore