🤖 AI Summary
This work addresses the challenge of detecting copyright-infringing content in large language model (LLM) training data. We propose COPYCHECK, an unsupervised framework that, for the first time, leverages the systematic difference in generation uncertainty between *seen* (in-training) and *unseen* content, termed "overconfidence bias," to perform membership inference without manual hyperparameter tuning or labeled data. Methodologically, COPYCHECK integrates document segmentation, uncertainty signal modeling, and uncertainty-guided clustering to enable fine-grained identification of copyrighted material. Evaluated on LLaMA-7B and LLaMA2-7B, it achieves average balanced accuracies of 90.1% and 91.6%, respectively, a relative improvement of over 90% compared with the strongest baseline, reaching up to 93.8% balanced accuracy. Strong generalization is further validated on GPT-J-6B. COPYCHECK significantly enhances the transparency and auditability of LLM training data with respect to copyright compliance.
📝 Abstract
The remarkable language ability of Large Language Models (LLMs) stems from extensive training on vast datasets, often including copyrighted material, which raises serious concerns about unauthorized use. While Membership Inference Attacks (MIAs) offer potential solutions for detecting such violations, existing approaches face critical limitations due to LLMs' inherent overconfidence, limited access to ground truth training data, and reliance on empirically determined thresholds. We present COPYCHECK, a novel framework that leverages uncertainty signals to detect whether copyrighted content was used in LLM training sets. Our method turns LLM overconfidence from a limitation into an asset by capturing uncertainty patterns that reliably distinguish between "seen" (training data) and "unseen" (non-training data) content. COPYCHECK further implements a two-fold strategy: (1) strategic segmentation of files into smaller snippets to reduce dependence on large-scale training data, and (2) uncertainty-guided unsupervised clustering to eliminate the need for empirically tuned thresholds. Experimental results show that COPYCHECK achieves an average balanced accuracy of 90.1% on LLaMA 7B and 91.6% on LLaMA2 7B in detecting seen files. Compared to the SOTA baseline, COPYCHECK achieves over 90% relative improvement, reaching up to 93.8% balanced accuracy. It further exhibits strong generalizability across architectures, maintaining high performance on GPT-J 6B. This work presents the first application of uncertainty for copyright detection in LLMs, offering practical tools for training data transparency.
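To make the uncertainty-guided clustering idea concrete, here is a minimal, self-contained sketch; it is **not** the paper's implementation. It assumes each file snippet has already been scored with a scalar uncertainty value (low for overconfidently generated "seen" content, high for "unseen" content), and shows how a simple two-cluster 1-D k-means can separate the two groups without any empirically tuned threshold. All variable names and numeric scores below are hypothetical placeholders.

```python
def two_means_1d(scores, iters=50):
    """Cluster 1-D uncertainty scores into two groups.

    Returns (low_center, high_center, labels), where label 0 marks the
    low-uncertainty cluster (candidate "seen" snippets) and label 1 the
    high-uncertainty cluster (candidate "unseen" snippets).
    """
    # Initialize the two centers at the extremes of the score range.
    lo, hi = min(scores), max(scores)
    labels = [0] * len(scores)
    for _ in range(iters):
        # Assign each score to its nearest center.
        labels = [0 if abs(s - lo) <= abs(s - hi) else 1 for s in scores]
        lo_pts = [s for s, lab in zip(scores, labels) if lab == 0]
        hi_pts = [s for s, lab in zip(scores, labels) if lab == 1]
        # Update each center to the mean of its assigned points.
        if lo_pts:
            lo = sum(lo_pts) / len(lo_pts)
        if hi_pts:
            hi = sum(hi_pts) / len(hi_pts)
    return lo, hi, labels

# Hypothetical per-snippet uncertainty scores: the first three snippets
# behave like "seen" content (overconfident, low uncertainty), the rest
# like "unseen" content (high uncertainty).
scores = [0.05, 0.08, 0.06, 0.91, 0.87, 0.95]
lo, hi, labels = two_means_1d(scores)
seen_indices = [i for i, lab in enumerate(labels) if lab == 0]
```

Because the decision boundary falls out of the clustering itself, no per-model threshold has to be hand-tuned, which is the property the abstract highlights.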