🤖 AI Summary
This work addresses the challenges of copyright disputes and benchmark contamination in pretraining data detection for large language models by proposing Gradient Deviation Scoring (GDS). GDS systematically leverages gradient dynamics during training—specifically, the evolution of sample familiarity—to perform membership inference based on characteristics such as the magnitude and position of gradient updates and the concentration of neuron activations. By constructing lightweight binary classifiers from gradient profiles of Feed-Forward Network (FFN) and Attention modules, the method achieves state-of-the-art performance across five public datasets, significantly outperforming existing baselines. Moreover, GDS demonstrates superior cross-dataset generalization and enhanced interpretability, offering a principled and effective approach to identifying whether a given sample was part of a model’s training data.
📝 Abstract
Pre-training data detection for LLMs is essential for addressing copyright concerns and mitigating benchmark contamination. Existing methods mainly rely either on likelihood-based statistical features or on heuristic signals measured before and after fine-tuning; the former are susceptible to word-frequency bias in corpora, while the latter depend strongly on the similarity of the fine-tuning data. From an optimization perspective, we observe that during training, samples transition from unfamiliar to familiar in a manner reflected by systematic differences in gradient behavior: familiar samples exhibit smaller update magnitudes, distinct update locations across model components, and more sharply activated neurons. Based on this insight, we propose GDS, a method that identifies pre-training data by probing the Gradient Deviation Scores of target samples. Specifically, we first represent each sample with gradient profiles that capture the magnitude, location, and concentration of parameter updates across FFN and Attention modules, revealing consistent distinctions between member and non-member data. These features are then fed into a lightweight classifier to perform binary membership inference. Experiments on five public datasets show that GDS achieves state-of-the-art performance with significantly improved cross-dataset transferability over strong baselines. Further interpretability analyses reveal clear distributional differences in gradient features, enabling practical and scalable pre-training data detection.
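The pipeline the abstract describes (per-sample gradient profiles → lightweight binary classifier) can be sketched minimally. The snippet below is an illustrative toy, not the paper's implementation: the three features stand in for hypothetical magnitude/location/concentration statistics of gradient updates, member samples are simulated with smaller update magnitudes as the abstract suggests, and the "lightweight classifier" is plain logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for GDS gradient profiles: each sample is
# represented by 3 simulated per-module gradient statistics
# (magnitude, location, concentration). Member samples are drawn
# with smaller update magnitudes, mirroring the paper's observation.
n_per_class = 200
members = rng.normal(loc=0.5, scale=0.2, size=(n_per_class, 3))
nonmembers = rng.normal(loc=1.0, scale=0.2, size=(n_per_class, 3))
X = np.vstack([members, nonmembers])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

# Lightweight binary classifier: logistic regression fitted with
# plain gradient descent (a sketch, not the paper's exact classifier).
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted membership prob.
    w -= lr * (X.T @ (p - y)) / len(y)        # gradient of log-loss w.r.t. w
    b -= lr * np.mean(p - y)                  # gradient of log-loss w.r.t. b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"train accuracy: {accuracy:.2f}")
```

In the actual method, the feature vector would come from real gradient probes of the target LLM's FFN and Attention modules rather than simulated Gaussians; the point here is only that once such features separate members from non-members, a very small classifier suffices.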