Rejection Mixing: Fast Semantic Propagation of Mask Tokens for Efficient DLLM Inference

📅 2026-02-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Diffusion-based large language models (DLLMs) often suffer from semantic inconsistencies during parallel decoding due to "combinatorial contradictions," making it hard to balance generation quality against inference speed. This work proposes ReMix, a training-free framework that, for the first time, introduces continuous mixed states as intermediate representations in discrete diffusion decoding. By iteratively refining token-level semantics in a continuous space and employing a rejection-and-fallback mechanism, ReMix resolves inter-token semantic conflicts before tokens are committed. Combined with non-autoregressive parallel decoding, the method achieves a 2–8× inference speedup with no degradation in generation quality.
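As a rough, hypothetical illustration of the rejection-and-fallback idea (not the authors' exact procedure), the sketch below decodes all masked positions in parallel and commits only those whose top-1 probability clears a confidence threshold; the rest fall back to the mask token for reprocessing in the next round. `MASK_ID`, `tau`, and the tensor shapes are assumptions made for the example.

```python
import torch

MASK_ID = 0  # hypothetical mask-token id; real DLLMs define their own


def rejection_step(logits: torch.Tensor, tokens: torch.Tensor,
                   tau: float = 0.9) -> torch.Tensor:
    """One parallel decoding step with a rejection-and-fallback gate.

    logits: (seq_len, vocab) model outputs for the current sequence.
    tokens: (seq_len,) current token ids; undecoded positions hold MASK_ID.
    """
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)    # per-position confidence and argmax token
    masked = tokens.eq(MASK_ID)       # only still-masked positions are candidates
    accept = masked & (conf >= tau)   # rejection rule: commit only confident draws
    out = tokens.clone()
    out[accept] = pred[accept]        # decode accepted positions in parallel
    return out                        # rejected positions remain MASK_ID
```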

📝 Abstract
Diffusion Large Language Models (DLLMs) promise fast non-autoregressive inference but suffer from a severe quality–speed trade-off in parallel decoding. This stems from the "combinatorial contradiction" phenomenon, where tokens decoded in parallel form semantically inconsistent combinations. We address this by integrating continuous representations into the discrete decoding process, as they preserve rich inter-position dependencies. We propose ReMix (Rejection Mixing), a framework that introduces a novel Continuous Mixing State as an intermediate between the initial masked state and the final decoded token state. This intermediate state allows a token's representation to be iteratively refined in a continuous space, resolving mutual conflicts with other tokens before collapsing into a final discrete sample. Furthermore, a rejection rule reverts uncertain representations from the continuous state back to the masked state for reprocessing, ensuring stability and preventing error propagation. ReMix thus mitigates combinatorial contradictions by enabling continuous-space refinement during discrete diffusion decoding. Extensive experiments demonstrate that ReMix, as a training-free method, achieves a 2–8× inference speedup without any quality degradation.
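One plausible way to picture the Continuous Mixing State is as a probability-weighted mixture of token embeddings that is refined across steps and only then collapsed into discrete tokens. The sketch below follows that reading; the `model` callable, the thresholds `tau_accept`/`tau_reject`, and the update rule are illustrative assumptions, not the paper's reference implementation.

```python
import torch


def remix_step(model, emb: torch.nn.Embedding, state: torch.Tensor,
               tokens: torch.Tensor, mask_id: int,
               tau_accept: float = 0.9, tau_reject: float = 0.3):
    """One ReMix-style refinement step (illustrative reading of the paper).

    state:  (seq_len, d) continuous inputs; undecided positions hold a
            mixed state, decided ones the embedding of their token.
    tokens: (seq_len,) token ids, mask_id where still undecided.
    """
    logits = model(state)                      # (seq_len, vocab); stand-in predictor
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)
    undecided = tokens.eq(mask_id)

    # Continuous Mixing State: probability-weighted mixture of token
    # embeddings, letting positions co-adapt before any discrete commitment.
    mixed = probs @ emb.weight                 # (seq_len, d)

    accept = undecided & (conf >= tau_accept)  # collapse confident mixtures to tokens
    reject = undecided & (conf < tau_reject)   # rejection rule: revert to masked state

    tokens = tokens.clone()
    tokens[accept] = pred[accept]

    state = state.clone()
    keep = undecided & ~accept & ~reject       # mid-confidence: stay mixed, keep refining
    state[keep] = mixed[keep]
    state[reject] = emb.weight[mask_id]        # fallback to the masked representation
    state[accept] = emb(pred[accept])          # decided positions use token embeddings
    return state, tokens
```

Mid-confidence positions stay in the mixed state so that neighbouring tokens can co-adapt across iterations, which is, on this reading, how combinatorial contradictions get resolved before sampling.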
Problem

Research questions and friction points this paper is trying to address.

Diffusion Large Language Models
combinatorial contradiction
parallel decoding
semantic inconsistency
non-autoregressive inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rejection Mixing
Diffusion Large Language Models
Non-autoregressive Inference
Continuous Mixing State
Combinatorial Contradiction