🤖 AI Summary
To address copyright infringement and test-set contamination risks arising from training-data leakage in large language models (LLMs), this paper proposes a theory-driven, reference-free detection method. Grounded in the statistical observation that maximum-likelihood training makes genuine training samples local maxima of the modeled conditional distribution along each input dimension, it formulates leakage detection as local-maximum identification in a discrete space. The approach constructs the conditional categorical distribution directly from the target LLM's own outputs, requiring no auxiliary reference model. Evaluated on WikiMIA, it achieves AUROC gains of 6.2% to 10.5% over prior methods. On the more challenging MIMIR benchmark, it outperforms all existing reference-free approaches and matches state-of-the-art reference-based methods, demonstrating both high detection accuracy and strict reference-freeness.
📝 Abstract
The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination. Despite improved performance, existing methods (including the state-of-the-art, Min-K%) are mostly developed upon simple heuristics and lack solid, reasonable foundations. In this work, we propose a novel and theoretically motivated methodology for pre-training data detection, named Min-K%++. Specifically, we present a key insight that through maximum likelihood training, training samples tend to be local maxima of the modeled distribution along each input dimension, which in turn allows us to translate the problem into the identification of local maxima. We then design our method to work under the discrete distribution modeled by LLMs; its core idea is to determine whether the input forms a mode of, or has relatively high probability under, the conditional categorical distribution. Empirically, the proposed method achieves new SOTA performance across multiple settings. On the WikiMIA benchmark, Min-K%++ outperforms the runner-up by 6.2% to 10.5% in detection AUROC averaged over five models. On the more challenging MIMIR benchmark, it consistently improves upon reference-free methods while performing on par with reference-based methods that require an extra reference model.
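The scoring idea described above can be sketched in a few lines of NumPy: standardize each token's log-probability by the mean and standard deviation of log-probabilities under the model's conditional categorical distribution, then average the lowest k% of per-token scores. This is a minimal illustration assuming access to per-position vocabulary logits; the function names, the default `k`, and the toy logits are assumptions for demonstration, not the paper's reference implementation.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the vocabulary axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

def min_k_pp_score(logits, token_ids, k=0.2):
    """Sketch of a Min-K%++-style score (names and defaults are illustrative).

    For each position t, standardize the observed token's log-probability
    by the mean and std of log-probabilities under the model's conditional
    categorical distribution at t, then average the lowest k fraction of
    per-token scores. Higher scores suggest the input is closer to a mode
    of the modeled distribution, i.e. more training-like.
    """
    log_probs = log_softmax(np.asarray(logits, dtype=np.float64))  # [T, V]
    probs = np.exp(log_probs)
    mu = (probs * log_probs).sum(axis=-1)          # E[log p] per position
    sigma = np.sqrt((probs * (log_probs - mu[:, None]) ** 2).sum(axis=-1))
    token_lp = log_probs[np.arange(len(token_ids)), token_ids]
    scores = (token_lp - mu) / sigma               # per-token standardized score
    n = max(1, int(k * len(scores)))               # lowest k% of tokens
    return float(np.sort(scores)[:n].mean())

# Toy example: a "model" that strongly prefers token 0 at every step.
toy_logits = np.array([[5.0, 0.0, 0.0, 0.0]] * 3)
seen = min_k_pp_score(toy_logits, [0, 0, 0])    # mode-forming, training-like input
unseen = min_k_pp_score(toy_logits, [1, 1, 1])  # low-probability input
print(seen > unseen)  # True: the mode-forming input scores higher
```

In practice the logits would come from a forward pass of the target LLM itself, which is what makes the method reference-free: no second model is consulted, only the model's own conditional distributions.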