DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation

📅 2025-06-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the opaque decoding mechanisms and unstable reinforcement learning training in diffusion-based large language models (dLLMs) for code generation, this paper proposes DiffuCoder (7B), a masked diffusion architecture that systematically uncovers its global planning and iterative refinement capabilities. We introduce GRPO—a coupled sampling algorithm that reduces policy optimization variance via complementary mask-noise injection—enabling diffusion-native, causally weakly dependent, and efficient training. Our approach integrates decoding-behavior analysis, temperature-adjusted sampling, and large-scale code pretraining. On the EvalPlus benchmark, DiffuCoder achieves a 4.4% absolute improvement over strong baselines, markedly enhancing generation diversity and search efficiency. This work provides the first empirical evidence of structural advantages of diffusion models over autoregressive models in code generation, establishing a new paradigm for controllable, high-quality program synthesis.

Technology Category

Application Category

📝 Abstract
Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models because their denoising models operate over the entire sequence. The global planning and iterative refinement features of dLLMs are particularly useful for code generation. However, current training and inference mechanisms for dLLMs in coding are still under-explored. To demystify the decoding behavior of dLLMs and unlock their potential for coding, we systematically investigate their denoising processes and reinforcement learning (RL) methods. We train a 7B dLLM, extbf{DiffuCoder}, on 130B tokens of code. Using this model as a testbed, we analyze its decoding behavior, revealing how it differs from that of AR models: (1) dLLMs can decide how causal their generation should be without relying on semi-AR decoding, and (2) increasing the sampling temperature diversifies not only token choices but also their generation order. This diversity creates a rich search space for RL rollouts. For RL training, to reduce the variance of token log-likelihood estimates and maintain training efficiency, we propose extbf{coupled-GRPO}, a novel sampling scheme that constructs complementary mask noise for completions used in training. In our experiments, coupled-GRPO significantly improves DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR causal during decoding. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework. https://github.com/apple/ml-diffucoder.
Problem

Research questions and friction points this paper is trying to address.

Exploring dLLMs for better code generation than AR models
Analyzing denoising processes and RL methods in dLLMs
Improving code generation performance with coupled-GRPO
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses masked diffusion models for code generation
Proposes coupled-GRPO for RL training
Enhances decoding diversity and efficiency