Lookahead-then-Verify: Reliable Constrained Decoding for Diffusion LLMs under Context-Free Grammars

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based large language models (dLLMs) struggle to guarantee syntactic correctness when generating outputs in languages defined by context-free grammars, such as code or chemical formulas. This work proposes LAVE, the first constrained decoding framework tailored to dLLMs that balances reliability and efficiency. Leveraging the parallel token prediction inherent to dLLMs, LAVE performs lookahead verification at every position during each non-autoregressive generation step, ensuring that intermediate outputs remain extendable into valid sentences. Experiments across four prominent dLLMs and three benchmarks demonstrate that LAVE significantly improves syntactic correctness over existing methods while introducing negligible computational overhead.

📝 Abstract
Diffusion Large Language Models (dLLMs) have demonstrated promising generative capabilities and are increasingly used to produce formal languages defined by context-free grammars, such as source code and chemical expressions. However, as probabilistic models, they still struggle to reliably generate syntactically valid outputs. A natural and promising direction to address this issue is to adapt constrained decoding techniques to enforce grammatical correctness during generation. However, applying these techniques faces two primary obstacles. On the one hand, the non-autoregressive nature of dLLMs renders most existing constrained decoding approaches inapplicable. On the other hand, current approaches specifically designed for dLLMs may allow intermediate outputs that are impossible to complete into valid sentences, which significantly limits their reliability in practice. To address these challenges, we present LAVE, a constrained decoding approach specifically designed for dLLMs. Our approach leverages a key property of dLLMs, namely their ability to predict token distributions for all positions in parallel during each forward pass. Whenever a new token is proposed by the model, LAVE performs lookahead using these distributions to efficiently and reliably verify the validity of the proposed token. This design enforces grammatical constraints reliably by preserving the potential for intermediate outputs to be extended into valid sentences. Extensive experiments across four widely used dLLMs and three representative benchmarks demonstrate that LAVE consistently outperforms existing baselines and achieves substantial improvements in syntactic correctness, while incurring negligible runtime overhead.
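
To make the lookahead-then-verify idea concrete, here is a minimal, self-contained Python sketch under stated assumptions: a toy balanced-parentheses grammar stands in for a real context-free grammar, `prefix_is_extendable` is a hypothetical extendability check (the paper's actual incremental parser is more involved), and `probs` mimics the per-position distributions a dLLM produces in one forward pass. This is an illustration of the general mechanism, not the authors' implementation.

```python
# Minimal sketch of one lookahead-then-verify denoising step.
# The grammar, helper names, and distribution layout are illustrative
# assumptions, not the paper's actual implementation.

MASK = None  # placeholder for a still-masked position


def prefix_is_extendable(tokens):
    """Toy CFG check: balanced parentheses. A masked suffix may still
    close any open brackets, so we fail only when the draft can no
    longer be completed into a valid sentence."""
    depth = 0
    for t in tokens:
        if t is MASK:
            return True          # remaining masks could still fix things
        depth += 1 if t == "(" else -1 if t == ")" else 0
        if depth < 0:
            return False         # ')' with no matching '(' is fatal
    return depth == 0            # fully filled: must be exactly balanced


def lave_step(tokens, probs, top_k=3):
    """One non-autoregressive generation step with lookahead verification.

    tokens: current sequence, MASK at unfilled positions
    probs:  per-position token distributions from a single forward pass,
            e.g. probs[i] = {"(": 0.7, ")": 0.3}
    """
    for pos, tok in enumerate(tokens):
        if tok is not MASK:
            continue
        ranked = sorted(probs[pos], key=probs[pos].get, reverse=True)
        for cand in ranked[:top_k]:
            trial = list(tokens)
            trial[pos] = cand
            # Lookahead: greedily fill the other masks from the same
            # distributions, then verify the draft remains extendable.
            draft = [max(probs[i], key=probs[i].get) if t is MASK else t
                     for i, t in enumerate(trial)]
            if prefix_is_extendable(draft):
                tokens[pos] = cand   # accept the verified token
                break
        # if no candidate verifies, the position stays masked this step
    return tokens


# Usage: four masked positions whose distributions alternate preference.
probs = [{"(": 0.7, ")": 0.3}, {"(": 0.3, ")": 0.7}] * 2
print(lave_step([MASK] * 4, probs))   # -> ['(', ')', '(', ')']
```

The key point the sketch illustrates is that the lookahead reuses distributions already computed in the current forward pass, so verifying each proposed token costs only parser checks rather than extra model calls, which is why the overhead stays small.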
Problem

Research questions and friction points this paper is trying to address.

Diffusion LLMs
constrained decoding
context-free grammars
syntactic validity
non-autoregressive generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

constrained decoding
diffusion LLMs
context-free grammars
lookahead verification
syntactic correctness