AI Summary
This work addresses a critical limitation in existing methods for detecting pretraining data of large language models under black-box, zero-shot settings: the neglect of the dynamic variation of information entropy across token positions during generation. To this end, we propose Positional Decay Reweighting (PDR), a training-free, plug-and-play framework that leverages the previously unexploited observation that memorization signals are strongest at the beginning of a sequence and decay as context accumulates. PDR employs an information entropy-guided dynamic weighting mechanism to amplify high-entropy signals from initial positions while suppressing noise in later tokens. As a general-purpose prior module, PDR consistently enhances the performance of multiple state-of-the-art detection methods across diverse benchmarks, demonstrating its effectiveness and robustness.
Abstract
Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and that they decay as context accumulates. To exploit this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and generally enhances a wide range of advanced methods across multiple benchmarks.
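The reweighting idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the exponential decay form, the `decay` rate, and the function name `pdr_score` are all assumptions chosen for clarity; the paper's actual weights are entropy-guided rather than a fixed schedule.

```python
import math

def pdr_score(token_scores, decay=0.05):
    """Aggregate per-token detection scores with positional decay weights.

    token_scores: per-token membership scores from any base detector
        (e.g. token log-likelihoods).
    decay: illustrative decay rate; larger values concentrate more
        weight on early positions.
    """
    # Earlier positions receive exponentially larger weights, mirroring
    # the observation that memorization signals are strongest at the
    # start of a sequence and fade as context accumulates.
    weights = [math.exp(-decay * i) for i in range(len(token_scores))]
    total = sum(weights)
    # Weighted average replaces the uniform mean used by most baselines.
    return sum(w * s for w, s in zip(weights, token_scores)) / total
```

With `decay=0`, this reduces to the uniform-weight average that existing methods use, so the decay rate directly interpolates between the baseline aggregation and a strongly front-loaded one.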