Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models

📅 2025-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
The opaque training data of black-box commercial large language models (e.g., GPT-4) raises serious concerns about copyright infringement, violations of authors' rights, and training data contamination. Method: We propose the first text-probing method that requires neither model weights nor token probabilities; it is driven by high surprisal, quantified as extreme negative log-probability, to elicit memorized content. By selecting maximally surprising text spans and combining zero-shot reconstruction evaluation with automated localization verification, the approach efficiently identifies implicitly memorized training fragments. Contribution/Results: Experiments on GPT-4 recover numerous verifiable original training excerpts. The method enables scalable data-provenance analysis without any internal access, offering practical value for copyright auditing, contamination detection, and model interpretability, and establishing a novel, deployable paradigm for tracing the training data of large models.
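
To make "high surprisal" concrete: surprisal is the negative log-probability a language model assigns to a token given its prefix. Below is a minimal sketch that scores per-token surprisal with a small open reference model; using GPT-2 via Hugging Face transformers as the scorer is an assumption for illustration, since the summary does not fix a scoring model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small open model stands in as the surprisal scorer (an assumption;
# the target of the probes is a proprietary model such as GPT-4).
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal) pairs, where surprisal = -log p(token | prefix)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, hence the shift below.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal = -log_probs[torch.arange(targets.size(0)), targets]
    return list(zip(tok.convert_ids_to_tokens(targets.tolist()), surprisal.tolist()))

# The highest-surprisal tokens are the most informative probe targets.
pairs = token_surprisals("Call me Ishmael. Some years ago, never mind how long precisely...")
print(sorted(pairs, key=lambda p: p[1], reverse=True)[:5])
```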

📝 Abstract
High-quality training data has proven crucial for developing performant large language models (LLMs). However, commercial LLM providers disclose few, if any, details about the data used for training. This lack of transparency creates multiple challenges: it limits external oversight and inspection of LLMs for issues such as copyright infringement, it undermines the agency of data authors, and it hinders scientific research on critical issues such as data contamination and data selection. How can we recover what training data is known to LLMs? In this work, we demonstrate a new method to identify training data known to proprietary LLMs like GPT-4 without requiring any access to model weights or token probabilities, by using information-guided probes. Our work builds on a key observation: text passages with high surprisal are good search material for memorization probes. By evaluating a model's ability to successfully reconstruct high-surprisal tokens in text, we can identify a surprising number of texts memorized by LLMs.
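
As a hedged illustration of the reconstruction probe the abstract describes, the sketch below hides high-surprisal words in a passage and asks a proprietary model to restore them through its public API. The [MASK] convention, prompt wording, and word-level masking are illustrative assumptions, not the paper's exact protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def probe_reconstruction(passage: str, hidden_words: list[str]) -> str:
    """Mask the given high-surprisal words and ask the model to restore them."""
    masked = passage
    for word in hidden_words:
        masked = masked.replace(word, "[MASK]", 1)
    resp = client.chat.completions.create(
        model="gpt-4",  # the proprietary target model
        messages=[{
            "role": "user",
            "content": "Replace each [MASK] with the exact original word:\n\n" + masked,
        }],
    )
    return resp.choices[0].message.content

# A passage is a memorization candidate if the model restores the hidden
# high-surprisal words verbatim, zero-shot.
```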
Problem

Research questions and friction points this paper is trying to address.

Identify training data in proprietary LLMs without access to model internals.
Address lack of transparency in LLM training data for oversight and research.
Detect memorized texts in LLMs using high-surprisal token reconstruction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses information-guided probes for data identification.
Identifies memorized texts via high-surprisal token reconstruction (a scoring sketch follows this list).
Works without accessing model weights or token probabilities.
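
As a rough stand-in for the automated verification mentioned above, the check below scores what fraction of hidden high-surprisal words the model restored verbatim; the paper's localization verification is presumably stricter, so treat this as an illustrative proxy.

```python
def reconstruction_accuracy(hidden: list[str], reconstruction: str) -> float:
    """Fraction of hidden high-surprisal words restored verbatim.

    A simple containment check standing in for the paper's automated
    localization verification (an illustrative proxy, not the method).
    """
    if not hidden:
        return 0.0
    hits = sum(1 for word in hidden if word in reconstruction)
    return hits / len(hidden)

# Example: the model restored 2 of 3 hidden words verbatim.
print(reconstruction_accuracy(["Ishmael", "whale", "Pequod"],
                              "Call me Ishmael ... aboard the Pequod"))  # ~0.67
```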