🤖 AI Summary
This work addresses the poorly understood memorization behavior of diffusion language models (DLMs), which poses potential privacy and copyright risks. The authors propose a unified probabilistic extraction framework that formalizes both prefix-conditioned decoding and diffusion-based generation as instances of a general process under arbitrary masking patterns and stochastic sampling trajectories. They establish, for the first time, a monotonic relationship between sampling resolution and memorization strength, and show that autoregressive decoding emerges as the limiting case of diffusion generation at maximal resolution, thereby unifying the two generative paradigms. Through theoretical analysis (e.g., Theorem 4.3), experiments across model scales, and prefix-conditioned evaluations over diverse sampling strategies, the study shows that DLMs leak substantially less memorized personally identifiable information (PII) under identical conditions, indicating an inherent privacy advantage.
📝 Abstract
Autoregressive language models (ARMs) have been shown to memorize and occasionally reproduce training data verbatim, raising concerns about privacy and copyright liability. Diffusion language models (DLMs) have recently emerged as a competitive alternative, yet their memorization behavior remains largely unexplored due to fundamental differences in generation dynamics. To address this gap, we present a systematic theoretical and empirical characterization of memorization in DLMs. We propose a generalized probabilistic extraction framework that unifies prefix-conditioned decoding and diffusion-based generation under arbitrary masking patterns and stochastic sampling trajectories. Theorem 4.3 establishes a monotonic relationship between sampling resolution and memorization: increasing the resolution strictly increases the probability of exact training-data extraction, implying that autoregressive decoding is the limiting case of diffusion-based generation obtained when the sampling resolution is maximal. Extensive experiments across model scales and sampling strategies validate these theoretical predictions. Under aligned prefix-conditioned evaluations, we further show that DLMs exhibit substantially lower memorization-based leakage of personally identifiable information (PII) than ARMs.
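The resolution–memorization relationship can be illustrated with a toy model (a hypothetical construction for intuition, not the paper's framework): assume a token decoded with v of its i preceding tokens already revealed is reproduced correctly with probability p_min + (p_max - p_min) * v / i. Decoding a memorized T-token string in finer left-to-right blocks then strictly raises the exact-extraction probability, and decoding one token per step (maximal resolution) recovers the fully prefix-conditioned, autoregressive case.

```python
# Toy illustration of the monotonicity claim (NOT the paper's actual framework).
# Assumption: a token at position i, decoded with `visible` of its i preceding
# tokens already revealed, is reproduced correctly with probability
# p_min + (p_max - p_min) * visible / i  (p_max with the full prefix, p_min with none).

def extraction_prob(T: int, steps: int, p_min: float = 0.6, p_max: float = 0.95) -> float:
    """Probability of reproducing all T tokens of a memorized string when
    decoding in `steps` left-to-right blocks; tokens within a block are
    sampled in parallel, so they do not see each other."""
    block = -(-T // steps)  # ceil division: tokens decoded per step
    prob = 1.0
    for i in range(T):
        visible = (i // block) * block  # tokens revealed before i's block starts
        p = p_max if i == 0 else p_min + (p_max - p_min) * visible / i
        prob *= p
    return prob

T = 8
probs = [extraction_prob(T, k) for k in (1, 2, 4, 8)]
# Finer sampling resolution -> strictly higher exact-extraction probability;
# steps == T (one token per step) reduces to the autoregressive case, 0.95**T.
```

Under these toy assumptions the extraction probability grows strictly with the number of decoding steps, mirroring the abstract's claim that autoregressive decoding sits at the memorization-maximizing end of the resolution spectrum.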