Finish First, Perfect Later: Test-Time Token-Level Cross-Validation for Diffusion Large Language Models

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard decoding in diffusion large language models (dLLMs) is irreversible: once a token is accepted it can no longer be revised, so early mistakes persist and compound across iterations. To address this, the paper proposes a training-free, two-stage decoding strategy: first, full-sequence token fill-up; second, iterative refinement via token-level cross-validation, which dynamically remasks and regenerates critical positions. The method combines parallel diffusion generation, bidirectional contextual modeling, and dynamic subset regeneration, improving inference robustness without additional training overhead. Evaluated across five benchmarks spanning language understanding, code generation, and mathematical reasoning, the approach consistently outperforms baseline dLLM decoding under identical computational budgets. These results support correctable decoding as a useful paradigm for diffusion-based language modeling.

📝 Abstract
Diffusion large language models (dLLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering advantages such as accelerated parallel decoding and bidirectional context modeling. However, the vanilla decoding strategy in discrete dLLMs suffers from a critical limitation: once a token is accepted, it can no longer be revised in subsequent steps. As a result, early mistakes persist across iterations, harming both intermediate predictions and final output quality. To address this issue, we propose Tolerator (Token-Level Cross-Validation Refinement), a training-free decoding strategy that leverages cross-validation among predicted tokens. Unlike existing methods that follow a single progressive unmasking procedure, Tolerator introduces a two-stage process: (i) sequence fill-up and (ii) iterative refinement by remasking and decoding a subset of tokens while treating the remaining tokens as context. This design enables previously accepted tokens to be reconsidered and corrected when necessary, leading to more reliable diffusion decoding outputs. We evaluate Tolerator on five standard benchmarks covering language understanding, code generation, and mathematics. Experiments show that our method achieves consistent improvements over the baselines under the same computational budget. These findings suggest that decoding algorithms are crucial to realizing the full potential of diffusion large language models. Code and data are publicly available.
Problem

Research questions and friction points this paper is trying to address.

Addresses irreversible token errors in diffusion language models
Enables revision of previously accepted tokens during decoding
Improves output reliability through iterative refinement process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage decoding with fill-up and refinement
Cross-validation among predicted tokens for correction
Training-free strategy remasking subset of tokens
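The two-stage procedure above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `predict` callback, the confidence-based choice of which tokens to remask, and all step/fraction parameters are assumptions for the sketch (the paper specifies only "remasking and decoding a subset of tokens while treating the remaining as context").

```python
MASK = -1  # sentinel for a masked position (assumed encoding)

def tolerator_style_decode(predict, length, fill_steps=4,
                           refine_steps=4, remask_frac=0.25):
    """Hedged sketch of two-stage diffusion decoding.

    `predict(tokens)` is a hypothetical model interface: given the partially
    masked sequence, it returns a (token, confidence) pair for every position,
    using the full sequence as bidirectional context.
    """
    tokens = [MASK] * length
    conf = [0.0] * length

    # Stage 1: sequence fill-up. Commit the most confident masked
    # positions in batches until the whole sequence is filled.
    per_step = max(1, length // fill_steps)
    while MASK in tokens:
        preds = predict(tokens)
        masked = [i for i in range(length) if tokens[i] == MASK]
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[:per_step]:
            tokens[i], conf[i] = preds[i]

    # Stage 2: iterative refinement. Remask a subset (here: the
    # lowest-confidence positions, an assumed heuristic) and redecode
    # it with the remaining tokens held fixed as context, so earlier
    # acceptances can be revised.
    k = max(1, int(length * remask_frac))
    for _ in range(refine_steps):
        worst = sorted(range(length), key=lambda i: conf[i])[:k]
        for i in worst:
            tokens[i] = MASK
        preds = predict(tokens)
        for i in worst:
            tokens[i], conf[i] = preds[i]
    return tokens
```

Note the design point this makes concrete: stage 1 is the usual progressive unmasking, while stage 2 is what distinguishes a correctable decoder, since every committed token remains a candidate for remasking on later passes.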