🤖 AI Summary
Diffusion Large Language Models (DLLMs) suffer from a fixed, pre-specified generation length: overly short sequences impair performance on complex tasks, while excessively long ones incur unnecessary computational overhead and may degrade output quality. To address this, we propose a dynamic, adaptive denoising strategy that enables flexible, training-free extension of the generation length. Our method assesses sequence completion in real time during denoising and triggers length expansion early in the process when incompleteness is detected. It then performs localized expansion by inserting mask tokens precisely into the incomplete regions. Coupled with a staged denoising mechanism, the approach supports variable-length outputs. Evaluated across multiple tasks, our method matches or surpasses fixed-length baselines while achieving higher token utilization and better generation efficiency, demonstrating both strong performance and computational economy.
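To make the localized-expansion step concrete, below is a minimal PyTorch sketch of the idea described above: mask tokens are spliced into a region flagged as incomplete so that later denoising steps can fill the added slots. The `MASK_ID` value and the `expand_incomplete_region` helper are illustrative assumptions, not the paper's implementation.

```python
import torch

MASK_ID = 3  # illustrative mask-token id; the real id is model-specific


def expand_incomplete_region(tokens: torch.Tensor,
                             region_end: int,
                             num_new_masks: int) -> torch.Tensor:
    """Splice mask tokens at the end of a region flagged as incomplete,
    so subsequent denoising steps can fill the added slots with content."""
    masks = torch.full((num_new_masks,), MASK_ID, dtype=tokens.dtype)
    return torch.cat([tokens[:region_end], masks, tokens[region_end:]])


# Toy usage: an 8-token canvas whose span ending at position 5 looks
# under-developed, so we grow it by 4 mask slots before the next step.
canvas = torch.arange(10, 18)
print(expand_incomplete_region(canvas, region_end=5, num_new_masks=4).tolist())
```

Because the expansion only inserts fresh mask slots, tokens that have already been denoised are left untouched; the model simply gains more room to develop the flagged region.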
📝 Abstract
Diffusion Large Language Models (DLLMs) are emerging as a powerful alternative to the dominant Autoregressive Large Language Models, offering efficient parallel generation and strong global context modeling. However, the practical application of DLLMs is hindered by a critical architectural constraint: the need for a statically predefined generation length. This static length allocation leads to a problematic trade-off: insufficient lengths cripple performance on complex tasks, while excessive lengths incur significant computational overhead and sometimes result in performance degradation. While the inference framework is rigid, we observe that the model itself possesses internal signals that correlate with the optimal response length for a given task. To bridge this gap, we leverage these latent signals and introduce DAEDAL, a novel training-free denoising strategy that enables Dynamic Adaptive Length Expansion for Diffusion Large Language Models. DAEDAL operates in two phases: 1) Before the denoising process, DAEDAL starts from a short initial length and iteratively expands it to a coarse task-appropriate length, guided by a sequence completion metric. 2) During the denoising process, DAEDAL dynamically intervenes by pinpointing and expanding insufficient generation regions through mask token insertion, ensuring the final output is fully developed. Extensive experiments on DLLMs demonstrate that DAEDAL achieves performance comparable, and in some cases superior, to meticulously tuned fixed-length baselines, while simultaneously enhancing computational efficiency by achieving a higher effective token ratio. By resolving the static length constraint, DAEDAL unlocks new potential for DLLMs, bridging a critical gap with their Autoregressive counterparts and paving the way for more efficient and capable generation.
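As a rough illustration of the two-phase control flow in the abstract, the sketch below mocks both phases with a toy model. The completion metric (peak EOS-token probability), the one-token-per-step denoising schedule, the tail-insertion heuristic in phase 2, and all names and constants (`daedal_generate`, `eos_confidence`, `denoise_step`, `MASK_ID`, `tau`) are assumptions for illustration, not DAEDAL's actual algorithm.

```python
import torch

MASK_ID, EOS_ID, VOCAB = 3, 2, 100  # toy ids; real values are model-specific


def toy_model(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for a DLLM forward pass: per-position token probabilities."""
    return torch.randn(len(tokens), VOCAB).softmax(-1)


def eos_confidence(model, tokens: torch.Tensor) -> float:
    """Assumed completion metric: the model's peak confidence that the
    sequence can terminate (EOS probability anywhere on the canvas)."""
    return model(tokens)[:, EOS_ID].max().item()


def denoise_step(model, tokens: torch.Tensor) -> torch.Tensor:
    """Toy schedule: commit the single most confident masked position."""
    masked = (tokens == MASK_ID).nonzero().flatten()
    if len(masked) == 0:
        return tokens
    conf, pred = model(tokens)[masked].max(-1)
    out = tokens.clone()
    out[masked[conf.argmax()]] = pred[conf.argmax()]
    return out


def daedal_generate(model, prompt, init_len=16, max_len=128,
                    tau=0.5, chunk=16, steps=64):
    # Phase 1: start short and iteratively grow the canvas until the
    # completion signal says the length is roughly task-appropriate.
    canvas = torch.cat([prompt, torch.full((init_len,), MASK_ID)])
    while eos_confidence(model, canvas) < tau and len(canvas) < max_len:
        canvas = torch.cat([canvas, torch.full((chunk,), MASK_ID)])

    # Phase 2: denoise; when the signal flags incompleteness mid-way,
    # splice fresh mask slots after the decoded content (a simplification
    # of pinpointing under-developed regions) instead of truncating.
    for _ in range(steps):
        canvas = denoise_step(model, canvas)
        if eos_confidence(model, canvas) < tau and len(canvas) < max_len:
            tail = int((canvas != MASK_ID).nonzero().max()) + 1
            masks = torch.full((chunk,), MASK_ID)
            canvas = torch.cat([canvas[:tail], masks, canvas[tail:]])
    return canvas


print(daedal_generate(toy_model, prompt=torch.tensor([7, 8, 9])).tolist())
```

The key property this sketch preserves is that length is never fixed up front: the canvas starts short, grows coarsely before denoising, and can keep growing during denoising, so unused mask slots (and their compute cost) are only allocated when the completion signal demands them.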