DreamOn: Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas

📅 2026-02-01
📈 Citations: 14
Influential: 1
🤖 AI Summary
Existing diffusion-based language models for code infilling are constrained by fixed-length masks, limiting their ability to generate variable-length outputs. This work proposes DreamOn, a framework that introduces two lightweight length-control states into the diffusion process, enabling dynamic adjustment of the generated sequence length without modifying the underlying model architecture. Experiments on Dream-Coder-7B and DiffuCoder-7B demonstrate that DreamOn matches state-of-the-art autoregressive models on the HumanEval-Infilling and SantaCoder-FIM benchmarks, while closely approaching the oracle performance attainable when the ground-truth target length is known. These results substantially improve the practicality of diffusion models for real-world code infilling.
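To make the length-control idea concrete, here is a minimal toy sketch of a decoding loop in which masked positions can resolve not only to vocabulary tokens but also to a "grow" or "shrink" state. The token names (`MASK`, `EXPAND`, `DELETE`), the stand-in `fake_denoiser`, and all control-flow details are illustrative assumptions, not the paper's actual implementation; DreamOn folds its two length-control states into the diffusion training objective itself.

```python
import random

# Hypothetical markers; DreamOn's actual states/tokens may differ.
MASK, EXPAND, DELETE = "<mask>", "<expand>", "<delete>"


def fake_denoiser(tokens):
    """Stand-in for the DLM's per-step prediction: each masked position
    resolves to either a concrete token, EXPAND (canvas too short), or
    DELETE (canvas too long). Random choice here, just to drive the loop."""
    return [random.choice(["x", "y", EXPAND, DELETE]) if t == MASK else t
            for t in tokens]


def variable_length_infill(prefix, suffix, init_masks=4, max_steps=50):
    """Toy diffusion-style infilling with a dynamically sized canvas:
    EXPAND splits one mask into two, DELETE drops the position, so the
    middle span can grow or shrink away from its initial guess."""
    middle = [MASK] * init_masks
    for _ in range(max_steps):
        nxt = []
        for tok in fake_denoiser(middle):
            if tok == EXPAND:
                nxt.extend([MASK, MASK])   # grow: one mask becomes two
            elif tok == DELETE:
                continue                   # shrink: drop this position
            else:
                nxt.append(tok)            # keep concrete prediction
        middle = nxt
        if MASK not in middle:             # fully denoised
            break
    return prefix + middle + suffix
```

The point of the sketch is only the control flow: because length edits are ordinary per-position predictions, the model adjusts the canvas size using its own outputs, with no external length oracle and no architectural change.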

📝 Abstract
Diffusion Language Models (DLMs) present a compelling alternative to autoregressive models, offering flexible, any-order infilling without specialized prompting design. However, their practical utility is blocked by a critical limitation: the requirement of a fixed-length masked sequence for generation. This constraint severely degrades code infilling performance when the predefined mask size mismatches the ideal completion length. To address this, we propose DreamOn, a novel diffusion framework that enables dynamic, variable-length generation. DreamOn augments the diffusion process with two length control states, allowing the model to autonomously expand or contract the output length based solely on its own predictions. We integrate this mechanism into existing DLMs with minimal modifications to the training objective and no architectural changes. Built upon Dream-Coder-7B and DiffuCoder-7B, DreamOn achieves infilling performance on par with state-of-the-art autoregressive models on HumanEval-Infilling and SantaCoder-FIM and matches oracle performance achieved with ground-truth length. Our work removes a fundamental barrier to the practical deployment of DLMs, significantly advancing their flexibility and applicability for variable-length generation. Our code is available at https://github.com/DreamLM/DreamOn.
Problem

Research questions and friction points this paper is trying to address.

Diffusion Language Models
Code Infilling
Fixed-length Mask
Variable-length Generation
Length Mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Code Infilling
Variable-length Generation
Length Control
DreamOn