Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework

📅 2025-07-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the legal and ethical risks—such as copyright infringement, privacy leakage, and bias—arising from contaminated pretraining data in large language models (LLMs), this paper proposes a data provenance method based on differences in neuron activations. The core insight is that training and non-training samples elicit discriminative, temporally unbiased neuron activation patterns during inference, which this work is the first to exploit for provenance detection. The paper also introduces CCNewsPDD, a new benchmark for pre-training data detection. By combining differential activation analysis with robust data transformations, the method achieves significant improvements across multiple mainstream LLMs (e.g., Llama, Qwen) and three benchmarks: average detection accuracy improves by 12.6% over state-of-the-art methods, while remaining robust to fine-tuning and prompt perturbations. This work establishes an interpretable, reproducible paradigm for LLM training-data traceability.

📝 Abstract
The performance of large language models (LLMs) is closely tied to their training data, which can include copyrighted material or private information, raising legal and ethical concerns. Additionally, LLMs face criticism for dataset contamination and internalizing biases. To address these issues, the Pre-Training Data Detection (PDD) task was proposed to identify if specific data was included in an LLM's pre-training corpus. However, existing PDD methods often rely on superficial features like prediction confidence and loss, resulting in mediocre performance. To improve this, we introduce NA-PDD, a novel algorithm analyzing differential neuron activation patterns between training and non-training data in LLMs. This is based on the observation that these data types activate different neurons during LLM inference. We also introduce CCNewsPDD, a temporally unbiased benchmark employing rigorous data transformations to ensure consistent time distributions between training and non-training data. Our experiments demonstrate that NA-PDD significantly outperforms existing methods across three benchmarks and multiple LLMs.
Problem

Research questions and friction points this paper is trying to address.

Detect if specific data was used in LLM pre-training
Improve accuracy of pre-training data detection methods
Address legal and ethical concerns about LLM training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuron activation-based detection framework
Differential neuron activation pattern analysis
Temporally unbiased benchmark CCNewsPDD
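The differential-activation idea above can be illustrated with a toy sketch. Note the caveats: the code below simulates neuron activations with NumPy rather than probing a real LLM, and the signature-projection scoring rule is an assumed simplification for illustration, not the published NA-PDD algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 512

# Hypothetical effect: training-set members activate a small subset of
# neurons more strongly than non-members during inference.
signature_idx = rng.choice(n_neurons, size=50, replace=False)

def fake_activations(is_member, n=100):
    """Simulate per-sample neuron activations (stand-in for real LLM probes)."""
    acts = rng.normal(0.0, 1.0, size=(n, n_neurons))
    if is_member:
        acts[:, signature_idx] += 0.8  # membership shifts these neurons
    return acts

# Reference sets with known provenance
member_ref = fake_activations(True)
nonmember_ref = fake_activations(False)

# Differential activation signature: per-neuron mean difference
diff = member_ref.mean(axis=0) - nonmember_ref.mean(axis=0)

def membership_score(acts):
    # Project a sample's activations onto the differential signature;
    # higher scores suggest the sample was seen in training.
    return acts @ diff / np.linalg.norm(diff)

# Score held-out samples: members should score higher on average
m_scores = membership_score(fake_activations(True))
n_scores = membership_score(fake_activations(False))
```

A real pipeline would replace `fake_activations` with hooks that record hidden-layer activations from the target model, and would threshold or calibrate the scores on the benchmark's labeled splits.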
Hongyi Tang
The Hong Kong University of Science and Technology

Zhihao Zhu
University of Science and Technology of China
Machine Learning Privacy · Recommender System · Graph Neural Network

Yi Yang
The Hong Kong University of Science and Technology