PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in existing methods for detecting pretraining data of large language models under black-box, zero-shot settings: the neglect of the dynamic variation of information entropy across token positions during generation. To this end, we propose Positional Decay Reweighting (PDR), a training-free, plug-and-play framework that leverages the previously unexploited observation that memorization signals are strongest at the beginning of a sequence and decay as context accumulates. PDR employs an information entropy–guided dynamic weighting mechanism to amplify high-entropy signals from initial positions while suppressing noise in later tokens. As a general-purpose prior module, PDR consistently enhances the performance of multiple state-of-the-art detection methods across diverse benchmarks, demonstrating its effectiveness and robustness.

๐Ÿ“ Abstract
Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and can usually enhance a wide range of advanced methods across multiple benchmarks.
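The core idea of the abstract, reweighting token-level detection scores so that early, high-uncertainty positions dominate the aggregate, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the exponential decay form, the `decay` rate, and the function name `pdr_score` are all assumptions for exposition.

```python
import math

def pdr_score(token_scores, decay=0.05):
    """Aggregate per-token detection scores with a positional decay prior.

    Early tokens (where model uncertainty is highest) get larger weights;
    later tokens are down-weighted as accumulated context makes them easier
    to predict regardless of memorization. The exponential weighting here
    is an illustrative choice, not the paper's specific scheme.
    """
    weights = [math.exp(-decay * i) for i in range(len(token_scores))]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, token_scores)) / total

# A sequence whose strongest scores sit at the start ranks higher than one
# with the same scores concentrated at the end, reflecting the positional prior.
front_loaded = pdr_score([0.9, 0.2, 0.1, 0.1])
back_loaded = pdr_score([0.1, 0.1, 0.2, 0.9])
assert front_loaded > back_loaded
```

Because the framework only rescales existing token-level scores, it can wrap any likelihood-based detector (e.g. Min-K% style methods) without retraining, which is what makes it "plug-and-play".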
Problem

Research questions and friction points this paper is trying to address.

pre-training data detection
data privacy
copyright compliance
black-box setting
zero-shot setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Positional Decay Reweighting
pre-training data detection
token-level reweighting
black-box auditing
information-theoretic dynamics