🤖 AI Summary
Diffusion large language models (LLMs) struggle to satisfy the syntactic constraints of formal languages (e.g., C++, JSON), limiting their applicability to structured generation tasks. Method: This work brings constrained decoding to the diffusion LLM paradigm for the first time, proposing a general additive infilling framework grounded in context-free grammars (CFGs). It reduces the problem to the emptiness check of a CFG–regular language intersection and solves it efficiently with dynamic programming over finite automata, enabling constraint propagation during decoding and uniformly supporting both single- and multi-region structured generation. Contribution/Results: Experiments on C++ code completion and JSON data generation demonstrate near-perfect syntactic correctness (∼100%) while maintaining or improving functional correctness, with moderate inference overhead. This is the first efficient, general, and theoretically grounded constrained decoding method for diffusion LLMs, enabling reliable structured output generation.
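To make the reduction concrete: a partially decoded diffusion output, in which some positions are still masked, can be viewed as a regular language over the token alphabet, where each fixed token forces one transition and each masked position allows any token. The sketch below (illustrative names, not the paper's implementation) builds such a linear DFA and checks whether a candidate completion is consistent with the already-decoded tokens.

```python
# Illustrative sketch: a partial output with [MASK] slots as a linear DFA.
# States are 0..n; reading position i's token moves from state i to i+1.
MASK = "[MASK]"

def partial_output_dfa(tokens, alphabet):
    """Build a linear DFA whose language is exactly the set of
    completions of `tokens`: fixed tokens force one transition,
    masked positions allow any terminal in `alphabet`."""
    n = len(tokens)
    delta = {}
    for i, t in enumerate(tokens):
        allowed = alphabet if t == MASK else {t}
        for a in allowed:
            delta[(i, a)] = i + 1
    return delta, 0, {n}  # transitions, start state, accepting states

def accepts(dfa, word):
    """Run the DFA on `word`; True iff it ends in an accepting state."""
    delta, state, accepting = dfa
    for a in word:
        if (state, a) not in delta:
            return False
        state = delta[(state, a)]
    return state in accepting
```

Intersecting this regular language with the target context-free language (the grammar of C++ or JSON) and testing the intersection for emptiness then decides whether the partial output is still completable.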
📝 Abstract
Large language models (LLMs) have shown promising performance across diverse domains. Many practical applications of LLMs, such as code completion and structured data extraction, require adherence to syntactic constraints specified by a formal language. Yet, due to their probabilistic nature, LLM output is not guaranteed to adhere to such formal languages. Prior work has proposed constrained decoding as a means to restrict LLM generation to a particular formal language. However, existing approaches are not applicable to the emerging paradigm of diffusion LLMs in practical scenarios such as the generation of formally correct C++ or JSON output. In this paper, we address this challenge and present the first constrained decoding method for diffusion models, one that can handle formal languages captured by context-free grammars. We begin by reducing constrained decoding to the more general additive infilling problem, which asks whether a partial output can be completed to a valid word in the target language; this formulation also naturally subsumes the previously unaddressed setting of multi-region infilling. We then reduce this problem to deciding whether the intersection of the target language and a regular language is empty, and present an efficient algorithm for this check on context-free languages. Empirical results on applications such as C++ code infilling and structured data extraction in JSON demonstrate that our method achieves near-perfect syntactic correctness while consistently preserving or improving functional correctness. Importantly, our efficiency optimizations keep the computational overhead practical.
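The decision problem at the core of the method, emptiness of the intersection of a context-free language with a regular language, can be sketched with the classical Bar-Hillel-style product, computed as a fixed point over triples (q, A, q') meaning "nonterminal A derives some string that takes the automaton from state q to state q'". This is a minimal illustration assuming a grammar in Chomsky normal form; the paper's algorithm adds efficiency optimizations not shown here, and all names are hypothetical.

```python
# Minimal sketch: is L(G) ∩ L(D) nonempty, for a CNF grammar G and DFA D?
# terminal_rules: set of (A, a)       meaning A -> a
# binary_rules:   set of (A, B, C)    meaning A -> B C
# delta:          dict (q, a) -> q'   DFA transitions
def intersection_nonempty(terminal_rules, binary_rules, start_sym,
                          delta, q0, accepting):
    derivable = set()  # triples (q, A, q')
    # Base case: A -> a and delta(q, a) = q' yields (q, A, q').
    for (q, a), q2 in delta.items():
        for (A, b) in terminal_rules:
            if b == a:
                derivable.add((q, A, q2))
    # Fixed point: A -> B C with (q, B, r) and (r, C, q') yields (q, A, q').
    changed = True
    while changed:
        changed = False
        for (A, B, C) in binary_rules:
            for (q, X, r) in list(derivable):
                if X != B:
                    continue
                for (r2, Y, q2) in list(derivable):
                    if Y == C and r2 == r and (q, A, q2) not in derivable:
                        derivable.add((q, A, q2))
                        changed = True
    # Nonempty iff the start symbol derives a word accepted by the DFA.
    return any((q0, start_sym, qf) in derivable for qf in accepting)
```

As a usage sketch, the grammar S -> A B, A -> a, B -> b intersected with a DFA accepting "ab" is nonempty, while intersecting it with a DFA accepting only "aa" is empty. In a constrained decoder, the DFA would encode the partial output and the check would prune any token choice that makes the intersection empty.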