🤖 AI Summary
Existing methods struggle to efficiently assess the privacy and copyright risks posed by approximate verbatim extraction in language models: the combinatorial space of possible suffixes makes conventional Monte Carlo estimation computationally prohibitive. This work proposes a decoding-constrained beam search method that, at a computational cost comparable to roughly 20 Monte Carlo samples, provides the first efficient and deterministic lower-bound estimate of such extraction risks. The approach substantially increases both the number of sequences identified as extractable and the extraction probability mass detected per sequence across diverse models and text types. It further reveals systematic scaling trends in extraction risk with model size and uncovers vulnerabilities that traditional exact-match detection methods fail to capture.
📝 Abstract
Recent work shows that standard greedy-decoding extraction methods for quantifying memorization in LLMs miss how extraction risk varies across sequences. Probabilistic extraction -- computing the probability of generating a target suffix given a prefix under a decoding scheme -- addresses this, but is tractable only for verbatim memorization, missing near-verbatim instances that pose similar privacy and copyright risks. Quantifying near-verbatim extraction risk is expensive: the set of near-verbatim suffixes is combinatorially large, and reliable Monte Carlo (MC) estimation can require ~100,000 samples per sequence. To mitigate this cost, we introduce decoding-constrained beam search, which yields deterministic lower bounds on near-verbatim extraction risk at a cost comparable to ~20 MC samples per sequence. Across experiments, our approach surfaces information invisible to verbatim methods: many more extractable sequences, substantially larger per-sequence extraction mass, and patterns in how near-verbatim extraction risk manifests across model sizes and types of text.
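The contrast between Monte Carlo estimation and a constrained-beam lower bound can be illustrated with a toy sketch. Everything below is hypothetical scaffolding, not the paper's implementation: `next_token_probs` is a stand-in for a language model, "near-verbatim" is simplified to Hamming distance at most 1 from the target suffix, and the exact enumerator is feasible only at toy scale. The beam keeps only partial suffixes still within the edit budget, so the summed probability of its survivors is a deterministic lower bound on the true near-verbatim extraction probability.

```python
import itertools

# Hypothetical toy vocabulary and next-token model (stand-in for an LM).
VOCAB = ["a", "b", "c"]

def next_token_probs(prefix):
    # Toy distribution: slightly prefers repeating the last character.
    base = {t: 1.0 for t in VOCAB}
    if prefix:
        base[prefix[-1]] += 1.0
    z = sum(base.values())
    return {t: p / z for t, p in base.items()}

def hamming(s, t):
    # Simplified "near-verbatim" criterion: substitutions only.
    return sum(x != y for x, y in zip(s, t))

def beam_lower_bound(target, beam_width=4, max_dist=1):
    # Decoding-constrained beam search: extend only partial suffixes that
    # remain within the edit budget of the target. Each surviving sequence
    # carries its exact probability, so the sum is a lower bound.
    beams = [("", 1.0)]
    for _ in range(len(target)):
        candidates = []
        for seq, p in beams:
            for tok, tp in next_token_probs(seq).items():
                new = seq + tok
                if hamming(new, target[: len(new)]) <= max_dist:
                    candidates.append((new, p * tp))
        candidates.sort(key=lambda x: -x[1])
        beams = candidates[:beam_width]
    return sum(p for seq, p in beams if hamming(seq, target) <= max_dist)

def exact_near_verbatim_prob(target, max_dist=1):
    # Brute-force enumeration (toy scale only): the true near-verbatim
    # probability that the beam bound approximates from below.
    total = 0.0
    for suffix in itertools.product(VOCAB, repeat=len(target)):
        s = "".join(suffix)
        if hamming(s, target) > max_dist:
            continue
        p = 1.0
        for i, tok in enumerate(s):
            p *= next_token_probs(s[:i])[tok]
        total += p
    return total
```

Because every sequence kept by the beam is counted with its exact model probability, `beam_lower_bound(target)` can never exceed `exact_near_verbatim_prob(target)`; widening the beam tightens the bound at a cost that stays far below exhaustive enumeration or large-sample Monte Carlo.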