🤖 AI Summary
This work addresses a key limitation of existing masked diffusion large language models: they require a fixed generation length and thus struggle to balance output quality with inference efficiency. The authors propose a training-free, single-stage length-adaptive strategy that leverages the dynamic evolution of the implicit end-of-sequence (EOS) token density during denoising as a signal of generation adequacy. This signal dynamically guides the contraction or expansion of the masked region, enabling bidirectional variable-length generation within a unified denoising framework. To the best of the authors' knowledge, this is the first approach to use implicit EOS density to control generation length. Empirical results show that the method achieves comparable performance on mathematical and code benchmarks while significantly improving inference efficiency and token utilization.
📝 Abstract
Beyond parallel generation and global context modeling, current masked diffusion large language models (masked dLLMs, e.g., LLaDA) suffer from a fundamental limitation: they require a predefined, fixed generation length, which lacks flexibility and forces an inevitable trade-off between output quality and computational efficiency. To address this, we study the denoising dynamics and find that the implicit density ($\rho$) of end-of-sequence ($\texttt{EOS}$) tokens serves as a reliable signal of generation sufficiency. In particular, the evolution of the implicit $\texttt{EOS}$ density during denoising reveals whether the current masked space is excessive or insufficient, thereby indicating the direction in which the generation length should be adjusted. Building on this insight, we propose $\textbf{$\rho$-$\texttt{EOS}$}$, a training-free, single-stage strategy that enables bidirectional variable-length generation for masked dLLMs. Unlike prior two-stage approaches, which require separate length-adjustment and iterative mask-insertion phases and support only unidirectional expansion, $\textbf{$\rho$-$\texttt{EOS}$}$ achieves bidirectional length adjustment within a unified denoising process by continuously estimating the implicit $\texttt{EOS}$ density: an excessively high density triggers $\texttt{MASK}$-token contraction, while an insufficiently low density induces expansion. Extensive experiments on mathematics and code benchmarks demonstrate that $\textbf{$\rho$-$\texttt{EOS}$}$ achieves comparable performance while substantially improving inference efficiency and token utilization. Code is available at https://github.com/yjyddq/rho-EOS.
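The density-guided adjustment described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the denoiser's per-position EOS probabilities are taken as given (here passed in as a plain list), and the thresholds `tau_hi`, `tau_lo` and the step size are made-up values for demonstration.

```python
MASK = "<mask>"


def eos_density(eos_probs):
    """Implicit EOS density: mean predicted EOS probability over masked positions."""
    return sum(eos_probs) / len(eos_probs) if eos_probs else 0.0


def adjust_length(seq, eos_probs, tau_hi=0.6, tau_lo=0.1, step=2):
    """One adjustment step in the spirit of rho-EOS (illustrative only).

    High density -> the masked space is excessive: contract by removing
    trailing MASK tokens. Low density -> the masked space is insufficient:
    expand by appending MASK tokens. Otherwise leave the length unchanged.
    """
    rho = eos_density(eos_probs)
    if rho > tau_hi:
        removed = 0
        while seq and seq[-1] == MASK and removed < step:
            seq.pop()
            removed += 1
    elif rho < tau_lo:
        seq.extend([MASK] * step)
    return seq, rho


# Contraction: high EOS density over the masked region shortens the sequence.
seq, rho = adjust_length(["a", MASK, MASK, MASK], [0.9, 0.8, 0.7])
print(len(seq), rho)  # 2 1.0 -> actually 2 masks removed, rho = 0.8

# Expansion: near-zero EOS density grows the masked region.
seq2, rho2 = adjust_length(["a", MASK], [0.01])
print(len(seq2), rho2)
```

In the actual method this check would run inside the denoising loop, re-estimating the density at each step so the length can move in either direction as generation proceeds.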