🤖 AI Summary
To address the high inference latency of large-scale diffusion language models—hindering real-time code generation—this paper proposes a non-autoregressive, parallel generation framework based on discrete-state diffusion. Departing from conventional token-by-token autoregressive decoding, our approach models latent states discretely and enables full-sequence parallel sampling, substantially improving inference throughput. On an H20 GPU, it achieves 2,146 tokens/sec—the fastest reported inference speed for diffusion-based code generation models to date—and is the first to exceed 2,000 tokens/sec. It maintains state-of-the-art functional correctness on major code-generation benchmarks (HumanEval, MBPP), establishing a new speed–quality Pareto frontier. Compared to Mercury and Gemini Diffusion under identical hardware conditions, our method delivers significantly higher throughput while preserving or improving generation quality.
📝 Abstract
We present Seed Diffusion Preview, a large-scale language model based on discrete-state diffusion, offering remarkably fast inference speed. Thanks to non-sequential, parallel generation, discrete diffusion models provide a notable speedup to mitigate the inherent latency of token-by-token decoding, as demonstrated recently (e.g., Mercury Coder, Gemini Diffusion). Seed Diffusion Preview achieves an inference speed of 2,146 token/s over H20 GPUs while maintaining competitive performance across a sweep of standard code evaluation benchmarks, significantly faster than contemporary Mercury and Gemini Diffusion, establishing new state of the art on the speed-quality Pareto frontier for code models.