🤖 AI Summary
Existing masked diffusion language models (MDLMs) suffer from inefficient parallel decoding, as their sampling speed is constrained by heuristic or offline distillation strategies (e.g., dParallel, d3LLM), failing to realize the inherent “diffusion advantage.”
Method: We propose the first online reinforcement learning framework tailored for MDLMs, built upon Group Relative Policy Optimization (GRPO). It introduces a learnable, independent Bernoulli unmasking scheduler that enables dynamic, adaptive parallel mask removal. A multi-objective reward function—comprising verifiable task rewards, distillation consistency rewards, and step penalties—is designed to jointly optimize generation quality and decoding efficiency in an end-to-end manner.
Contribution/Results: On mathematical reasoning and code generation benchmarks, our method significantly improves the accuracy–latency trade-off and substantially increases tokens decoded per step, surpassing the performance ceiling of offline distillation approaches.
📝 Abstract
Masked diffusion language models (MDLMs) offer the potential for parallel token generation, but most open-source MDLMs decode fewer than 5 tokens per model forward pass even with sophisticated sampling strategies. As a result, their sampling speeds are often comparable to AR + speculative decoding schemes, limiting their advantage over mainstream autoregressive approaches. Existing distillation-based accelerators (dParallel, d3LLM) finetune MDLMs on trajectories generated by a base model, which can become off-policy during finetuning and restrict performance to the quality of the base model's samples. We propose exttt{dUltra}, an on-policy reinforcement learning framework based on Group Relative Policy Optimization (GRPO) that learns unmasking strategies for efficient parallel decoding. dUltra introduces an unmasking planner head that predicts per-token unmasking likelihoods under independent Bernoulli distributions. We jointly optimize the base diffusion LLM and the unmasking order planner using reward signals combining verifiable reward, distillation reward, and the number of unmasking steps. Across mathematical reasoning and code generation tasks, dUltra improves the accuracy--efficiency trade-off over state-of-the-art heuristic and distillation baselines, moving towards achieving ``diffusion supremacy'' over autoregressive models.