🤖 AI Summary
Diffusion language models (DLMs) for code generation suffer from a fundamental trade-off between inference speed and generation quality: reducing the number of sampling steps typically causes severe performance degradation. To address this, we propose Saber, a training-free sampling algorithm that jointly introduces adaptive acceleration and error-aware, backtracking-enhanced remasking, the first integration of these mechanisms in DLM decoding. Saber dynamically adjusts step sizes as the code context is established, and it identifies and corrects erroneous tokens through context-aware remasking with backtracking-based recovery, overcoming the limitations of fixed-step sampling without modifying model parameters. Experiments across major code generation benchmarks show that Saber improves Pass@1 by an average of 1.9% (absolute) while delivering a 251.4% average inference speedup, substantially narrowing the performance gap between DLMs and autoregressive models.
📝 Abstract
Diffusion language models (DLMs) are emerging as a powerful and promising alternative to the dominant autoregressive paradigm, offering inherent advantages in parallel generation and bidirectional context modeling. However, on code generation tasks, which impose strong structural constraints, DLM performance is significantly hampered by a critical trade-off between inference speed and output quality: we observe that accelerating generation by reducing the number of sampling steps usually leads to a catastrophic collapse in performance. In this paper, we introduce efficient Sampling with Adaptive acceleration and Backtracking Enhanced Remasking (Saber), a novel training-free sampling algorithm that improves both the inference speed and output quality of DLMs in code generation. Saber is motivated by two key insights into the DLM generation process: 1) generation can be adaptively accelerated as more of the code context is established; 2) it requires a backtracking mechanism to revert already-generated tokens. Extensive experiments on multiple mainstream code generation benchmarks show that Saber improves Pass@1 accuracy by an average of 1.9% over mainstream DLM sampling methods while achieving an average 251.4% inference speedup. By leveraging the inherent advantages of DLMs, our work significantly narrows the performance gap with autoregressive models in code generation.
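To make the two insights concrete, here is a minimal, purely illustrative sketch of this style of decoding loop. It is not Saber's actual algorithm: the confidence function, the acceleration schedule (`k` growing with the number of committed tokens), and the fixed remasking threshold are all assumptions introduced for illustration.

```python
MASK = "<mask>"

def sample_with_backtracking(confidence, seq_len=12, base_k=1,
                             remask_threshold=0.3, max_steps=100):
    """Illustrative sketch (not Saber itself) of adaptive-acceleration
    plus backtracking-style remasking for a masked diffusion decoder.

    `confidence(seq, i)` is a hypothetical stand-in for a DLM's
    per-position confidence: any callable returning a float in [0, 1]
    for position i given the partial sequence `seq`.
    """
    seq = [MASK] * seq_len
    steps = 0
    while MASK in seq and steps < max_steps:
        steps += 1

        # Insight 1 (adaptive acceleration): commit more tokens per step
        # as more of the context is established. The schedule below is
        # an arbitrary illustrative choice.
        filled = sum(tok != MASK for tok in seq)
        k = base_k + filled // 4

        # Score the masked positions and commit the k most confident.
        masked = [i for i, t in enumerate(seq) if t == MASK]
        best = sorted(masked, key=lambda i: confidence(seq, i),
                      reverse=True)
        for i in best[:k]:
            seq[i] = f"tok{i}"  # placeholder for the decoded token

        # Insight 2 (backtracking via remasking): revert committed
        # tokens whose confidence drops below a threshold under the
        # newly established context, so they can be re-generated.
        for i, t in enumerate(seq):
            if t != MASK and confidence(seq, i) < remask_threshold:
                seq[i] = MASK
    return seq, steps
```

With a uniformly confident model the loop finishes in fewer steps than there are tokens, since the per-step commit budget grows as context fills in; with a model whose confidence drops after commitment, tokens are reverted and retried, trading steps for quality.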