Saber: An Efficient Sampling with Adaptive Acceleration and Backtracking Enhanced Remasking for Diffusion Language Model

📅 2025-10-20
🤖 AI Summary
Diffusion language models (DLMs) for code generation suffer from a fundamental trade-off between inference speed and generation quality: reducing the number of sampling steps typically causes severe performance degradation. To address this, we propose Saber, a training-free sampling algorithm that jointly introduces adaptive acceleration and error-aware, backtracking-enhanced remasking, the first integration of these mechanisms in DLM decoding. Saber dynamically adjusts step sizes and identifies and corrects erroneous tokens, overcoming the limitations of fixed-step sampling. It further incorporates context-aware remasking and backtracking-based recovery, improving accuracy and efficiency without modifying model parameters. Experiments across major code generation benchmarks demonstrate that Saber achieves an average 1.9% absolute improvement in Pass@1 while delivering an average 251.4% inference speedup, substantially narrowing the performance gap between DLMs and autoregressive models.

📝 Abstract
Diffusion language models (DLMs) are emerging as a powerful and promising alternative to the dominant autoregressive paradigm, offering inherent advantages in parallel generation and bidirectional context modeling. However, the performance of DLMs on code generation tasks, which have stronger structural constraints, is significantly hampered by the critical trade-off between inference speed and output quality. We observed that accelerating the code generation process by reducing the number of sampling steps usually leads to a catastrophic collapse in performance. In this paper, we introduce efficient Sampling with Adaptive acceleration and Backtracking Enhanced Remasking (i.e., Saber), a novel training-free sampling algorithm for DLMs to achieve better inference speed and output quality in code generation. Specifically, Saber is motivated by two key insights into the DLM generation process: 1) it can be adaptively accelerated as more of the code context is established; 2) it requires a backtracking mechanism to reverse generated tokens. Extensive experiments on multiple mainstream code generation benchmarks show that Saber improves Pass@1 accuracy by an average of 1.9% over mainstream DLM sampling methods, while achieving an average 251.4% inference speedup. By leveraging the inherent advantages of DLMs, our work significantly narrows the performance gap with autoregressive models in code generation.
Problem

Research questions and friction points this paper is trying to address.

Improves the trade-off between inference speed and generation quality for code generation in diffusion language models
Addresses the performance collapse that occurs when sampling steps are reduced in DLMs
Enables adaptive acceleration and backtracking under the structural constraints of code
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive acceleration during code context establishment
Backtracking mechanism to reverse generated tokens
Training-free sampling algorithm for diffusion language models
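The two mechanisms above can be sketched in toy form. The snippet below is a minimal illustration under stated assumptions, not the paper's actual algorithm: `toy_scores` stands in for a real DLM forward pass, and the budget schedule and confidence thresholds are invented for demonstration.

```python
import random

MASK = "<mask>"

def toy_scores(tokens, step, vocab=("a", "b", "c")):
    """Stand-in for a DLM forward pass: proposes a token and a confidence
    for every position. Purely illustrative, not a real model."""
    rng = random.Random(step)
    return {i: (rng.choice(vocab), rng.random()) for i in range(len(tokens))}

def saber_style_decode(length=12, remask_tau=0.15, max_steps=200):
    """Toy sketch of Saber-style decoding: an adaptive per-step unmasking
    budget plus backtracking remasking of low-confidence committed tokens."""
    tokens = [MASK] * length
    for step in range(1, max_steps + 1):
        if MASK not in tokens:
            break
        scores = toy_scores(tokens, step)
        # Adaptive acceleration: commit more positions per step as more
        # of the context is established (budget grows from 1 to 5 here).
        decoded = sum(t != MASK for t in tokens)
        budget = 1 + decoded * 4 // length
        masked = [i for i in range(length) if tokens[i] == MASK]
        # Commit the highest-confidence masked positions within the budget.
        picks = sorted(masked, key=lambda i: -scores[i][1])[:budget]
        for i in picks:
            tokens[i] = scores[i][0]
        # Backtracking-enhanced remasking: re-mask committed tokens whose
        # confidence falls below a threshold. Remasking strictly fewer
        # tokens than were just committed keeps decoding moving forward.
        committed = [i for i in range(length) if tokens[i] != MASK]
        low = [i for i in committed if scores[i][1] < remask_tau]
        for i in low[: len(picks) - 1]:
            tokens[i] = MASK
    return tokens

print(saber_style_decode())
```

Because each step commits at least one more token than it remasks, the loop is guaranteed to terminate; a real implementation would instead derive confidences and remasking decisions from the model's own token distributions.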
Authors
Yihong Dong, Peking University (Code Generation, Large Language Models)
Zhaoyu Ma, School of Computer Science, Peking University
Xue Jiang, School of Computer Science, Peking University
Zhiyuan Fan, PhD Student, MIT (reinforcement learning, computational game theory)
Jiaru Qian, School of Computer Science, Peking University
Yongmin Li, School of Computer Science, Peking University
Jianha Xiao, School of Computer Science, Peking University
Zhi Jin, Associate Professor, Sun Yat-Sen University
Rongyu Cao, Chinese Academy of Sciences (data mining)
Binhua Li, Tongyi Lab, Alibaba Group
Fei Huang, Tongyi Lab, Alibaba Group
Yongbin Li, Tongyi Lab, Alibaba Group
Ge Li, Full Professor of Computer Science, Peking University (Program Analysis, Program Generation, Deep Learning)