🤖 AI Summary
The opaque training data of black-box commercial large language models (e.g., GPT-4) raises serious concerns about copyright infringement, violations of authors' rights, and training-data contamination. Method: We propose the first text probing method that requires neither model weights nor token probabilities, driven by high surprisal (quantified as extreme negative log-probability) to elicit memorized content. By generating maximally surprising inputs and combining zero-shot reconstruction evaluation with automated localization verification, the approach efficiently identifies implicitly memorized training fragments. Contribution/Results: Experiments on GPT-4 recover numerous verifiable excerpts of original training text. The method enables scalable data-provenance analysis without internal model access, with practical value for copyright auditing, contamination detection, and model interpretability, establishing a novel, deployable paradigm for large-model data provenance.
📝 Abstract
High-quality training data has proven crucial for developing performant large language models (LLMs). However, commercial LLM providers disclose few, if any, details about the data used for training. This lack of transparency creates multiple challenges: it limits external oversight and inspection of LLMs for issues such as copyright infringement, it undermines the agency of data authors, and it hinders scientific research on critical issues such as data contamination and data selection. How can we recover what training data is known to LLMs? In this work, we demonstrate a new method to identify training data known to proprietary LLMs like GPT-4 without requiring any access to model weights or token probabilities, by using information-guided probes. Our work builds on a key observation: text passages with high surprisal are good search material for memorization probes. By evaluating a model's ability to successfully reconstruct high-surprisal tokens in text, we can identify a surprising number of texts memorized by LLMs.
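The core idea above can be sketched in a few lines: score each token's surprisal (negative log-probability) under a reference model, mask the most surprising tokens, and check whether the probed model can reconstruct them exactly. This is a minimal illustrative sketch, not the paper's implementation; the function names, the toy probabilities, and the masking scheme are assumptions, and in practice the per-token probabilities would come from an actual reference language model.

```python
import math

def surprisal(prob):
    """Surprisal of a token: negative log-probability (in nats)."""
    return -math.log(prob)

def pick_high_surprisal_tokens(tokens, probs, k=2):
    """Select the k tokens with the highest surprisal under a reference
    model -- candidates to mask in a reconstruction probe.
    (Illustrative helper, not the paper's API.)"""
    scored = sorted(zip(tokens, map(surprisal, probs)),
                    key=lambda pair: pair[1], reverse=True)
    return [tok for tok, _ in scored[:k]]

def reconstruction_accuracy(masked_targets, model_guesses):
    """Fraction of masked high-surprisal tokens the probed model
    reproduces exactly; high accuracy on surprising tokens suggests
    memorization rather than generic next-token prediction."""
    hits = sum(g == t for g, t in zip(model_guesses, masked_targets))
    return hits / len(masked_targets)

# Toy example with made-up reference-model probabilities:
tokens = ["Call", "me", "Ishmael", "."]
probs  = [0.20, 0.30, 0.001, 0.25]   # "Ishmael" is highly surprising
targets = pick_high_surprisal_tokens(tokens, probs, k=1)
print(targets)                                         # ['Ishmael']
print(reconstruction_accuracy(targets, ["Ishmael"]))   # 1.0
```

The intuition: a generic model should struggle to guess a low-probability token from context alone, so exact reconstruction of many such tokens in a passage is evidence the passage was seen during training.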