dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning

📅 2025-12-24
🤖 AI Summary
Existing masked diffusion language models (MDLMs) suffer from inefficient parallel decoding: their sampling speed is constrained by heuristic or offline distillation strategies (e.g., dParallel, d3LLM), so they fail to realize the inherent "diffusion advantage." Method: We propose the first online reinforcement learning framework tailored to MDLMs, built on Group Relative Policy Optimization (GRPO). It introduces a learnable unmasking scheduler, based on independent per-token Bernoulli distributions, that enables dynamic, adaptive parallel mask removal. A multi-objective reward function, comprising verifiable task rewards, distillation-consistency rewards, and step penalties, jointly optimizes generation quality and decoding efficiency end to end. Contribution/Results: On mathematical reasoning and code generation benchmarks, our method significantly improves the accuracy–latency trade-off and substantially increases the number of tokens decoded per step, surpassing the performance ceiling of offline distillation approaches.
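The summary's core mechanism, independent per-token Bernoulli unmasking, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the "always reveal at least one token" safeguard is a common assumption added here to guarantee progress.

```python
import random

def sample_unmask_set(mask_positions, unmask_probs, rng):
    """Pick which currently-masked positions to reveal in one decoding step.

    Each masked position i is revealed independently with probability
    unmask_probs[i] (one Bernoulli draw per token). If no position is
    drawn, the single most confident one is revealed so decoding
    always advances (an assumption for this sketch, not from the paper).
    """
    chosen = [i for i in mask_positions if rng.random() < unmask_probs[i]]
    if not chosen:
        chosen = [max(mask_positions, key=lambda i: unmask_probs[i])]
    return chosen

# Example: four masked positions with planner-predicted probabilities.
masked = [0, 1, 2, 3]
probs = {0: 0.9, 1: 0.1, 2: 0.7, 3: 0.05}
revealed = sample_unmask_set(masked, probs, random.Random(0))
```

Because each token's draw is independent, the number of tokens revealed per step adapts to the planner's confidence rather than being fixed in advance.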

📝 Abstract
Masked diffusion language models (MDLMs) offer the potential for parallel token generation, but most open-source MDLMs decode fewer than 5 tokens per model forward pass even with sophisticated sampling strategies. As a result, their sampling speeds are often comparable to AR + speculative decoding schemes, limiting their advantage over mainstream autoregressive approaches. Existing distillation-based accelerators (dParallel, d3LLM) finetune MDLMs on trajectories generated by a base model, which can become off-policy during finetuning and restrict performance to the quality of the base model's samples. We propose dUltra, an on-policy reinforcement learning framework based on Group Relative Policy Optimization (GRPO) that learns unmasking strategies for efficient parallel decoding. dUltra introduces an unmasking planner head that predicts per-token unmasking likelihoods under independent Bernoulli distributions. We jointly optimize the base diffusion LLM and the unmasking order planner using reward signals combining verifiable reward, distillation reward, and the number of unmasking steps. Across mathematical reasoning and code generation tasks, dUltra improves the accuracy–efficiency trade-off over state-of-the-art heuristic and distillation baselines, moving towards achieving "diffusion supremacy" over autoregressive models.
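The abstract's reward signal combines three terms: a verifiable task reward, a distillation reward, and a penalty on the number of unmasking steps. A minimal scalar combination might look like the sketch below; the function name and the weights are illustrative assumptions, not values from the paper.

```python
def combined_reward(answer_correct, distill_consistency, num_steps,
                    w_task=1.0, w_distill=0.5, w_step=0.01):
    """Scalarize the three reward signals described in the abstract.

    answer_correct:      1.0 if the rollout's final answer passes the
                         verifier, else 0.0 (verifiable reward)
    distill_consistency: similarity of the rollout to a teacher
                         trajectory, assumed in [0, 1] (distillation reward)
    num_steps:           unmasking steps used by the rollout (penalized)
    The weights w_task, w_distill, w_step are hypothetical.
    """
    return (w_task * answer_correct
            + w_distill * distill_consistency
            - w_step * num_steps)

# A correct rollout that finishes in fewer steps earns a higher reward:
fast = combined_reward(1.0, 0.8, num_steps=16)
slow = combined_reward(1.0, 0.8, num_steps=64)
```

The step penalty is what pushes the policy toward fewer forward passes (more tokens per step), while the task and distillation terms keep generation quality from collapsing.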
Problem

Research questions and friction points this paper is trying to address.

Accelerates masked diffusion language models for faster parallel token generation
Addresses off-policy limitations in distillation-based acceleration methods
Optimizes unmasking strategies to improve accuracy-efficiency trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes unmasking strategies for diffusion models
Unmasking planner head predicts per-token unmasking likelihoods under independent Bernoulli distributions
Joint optimization combines verifiable rewards, distillation rewards, and a step-count penalty
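Both the summary and abstract name GRPO as the underlying RL algorithm. Its defining step is computing group-relative advantages: rewards for a group of rollouts sampled from the same prompt are standardized against the group's own mean and standard deviation, removing the need for a learned value baseline. A minimal sketch of that step (stdlib only; the function name is hypothetical):

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages for one group of rollouts.

    Each rollout's reward is standardized within its group
    (zero mean, roughly unit variance), so above-average rollouts
    get positive advantage and below-average ones negative.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts from the same prompt with different scalar rewards:
adv = group_relative_advantages([1.24, 0.76, 1.10, 0.40])
```

These advantages then weight the policy-gradient update on both the base diffusion LLM and the planner head, per the paper's joint-optimization setup.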