Improving Variable-Length Generation in Diffusion Language Models via Length Regularization

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models often suffer from confidence bias under the fixed-canvas assumption when the target sequence length is unknown, leading to either insufficient or redundant generation. This work proposes LR-DLLM, a framework that, for the first time, explicitly models generation length as a variable during inference. By introducing a length-regularized formulation, it decouples semantic compatibility from length uncertainty, enabling dynamic variable-length generation without any architectural modifications or retraining. Evaluated on HumanEval-Infilling, LR-DLLM achieves a Pass@1 score of 51.3%, a 13.4% absolute improvement over the DreamOn baseline, and demonstrates consistent gains across four languages on McEval, raising the average performance from 37.2% to 51.5%, a 14.3% absolute increase.

📝 Abstract
Diffusion Large Language Models (DLLMs) are inherently ill-suited for variable-length generation, as their inference is defined on a fixed-length canvas and implicitly assumes a known target length. When the length is unknown, as in realistic completion and infilling, naively comparing confidence across mask lengths becomes systematically biased, leading to under-generation or redundant continuations. In this paper, we show that this failure arises from an intrinsic length-induced bias in generation confidence estimates, leaving existing DLLMs without a robust way to determine generation length and making variable-length inference unreliable. To address this issue, we propose LR-DLLM, a length-regularized inference framework for DLLMs that treats generation length as an explicit variable and achieves reliable length determination at inference time. It decouples semantic compatibility from length-induced uncertainty through an explicit length regularization that corrects biased confidence estimates. Based on this, LR-DLLM enables dynamic expansion or contraction of the generation span without modifying the underlying DLLM or its training procedure. Experiments show that LR-DLLM achieves 51.3% Pass@1 on HumanEval-Infilling under fully unknown lengths (+13.4% vs. DreamOn) and 51.5% average Pass@1 on four-language McEval (+14.3% vs. DreamOn).
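The core idea, comparing generation confidence across candidate mask lengths while correcting for length-induced bias, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function `select_length`, the penalty term `lam * log(L)`, and the use of mean log-confidence as the semantic score are all assumptions made for the sketch; the paper's exact regularizer is not specified in this summary.

```python
import math

def select_length(conf_by_length, lam=0.1):
    """Pick a generation length from per-token confidences.

    conf_by_length: dict mapping a candidate length L to the list of
    L per-token confidences (probabilities in (0, 1]) the model
    assigns when decoding a canvas of that length.
    lam: hypothetical regularization strength (illustrative only).

    Naively ranking candidate lengths by raw confidence is
    systematically biased, since longer spans accumulate more
    uncertainty. A length-regularized score subtracts a penalty that
    grows with L, decoupling semantic compatibility (the mean
    log-confidence) from length-induced uncertainty.
    """
    def score(L, confs):
        mean_logconf = sum(math.log(c) for c in confs) / L  # semantic compatibility
        penalty = lam * math.log(L)                         # length regularization (illustrative)
        return mean_logconf - penalty

    return max(conf_by_length, key=lambda L: score(L, conf_by_length[L]))
```

With `lam = 0` this reduces to the naive comparison the abstract identifies as biased; a positive `lam` trades off per-token confidence against span length, which is the kind of correction the length-regularized formulation provides.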
Problem

Research questions and friction points this paper is trying to address.

variable-length generation
diffusion language models
length bias
generation confidence
infilling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Length Regularization
Diffusion Language Models
Variable-Length Generation
Inference Framework
Confidence Calibration