DiFFPO: Training Diffusion LLMs to Reason Fast and Furious via Reinforcement Learning

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the trade-off between inference quality and speed in masked diffusion large language models (dLLMs). DiFFPO is a unified off-policy reinforcement learning framework whose core components are a two-stage likelihood approximation and an importance-sampling correction. It jointly optimizes the diffusion model and an adaptive sampling controller that learns a per-prompt inference threshold, yielding sample-efficient training and a better quality–latency Pareto frontier. By exploiting dLLMs' native multi-token prediction together with surrogate-policy training, DiFFPO improves sampling efficiency. Experiments on mathematical reasoning and planning benchmarks show higher accuracy at fewer function evaluations than baselines, and the pipeline is demonstrated on open-source diffusion LLMs without architectural modification.
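The importance-sampling correction in the summary can be sketched as follows: the RL update is computed under a surrogate policy with a tractable likelihood, and a per-sequence importance weight corrects for the mismatch with the policy that actually generated the samples. This is a minimal, generic off-policy sketch with illustrative names, not the paper's implementation.

```python
import numpy as np

def is_corrected_reward_objective(logp_surrogate, logp_behavior, rewards, clip=10.0):
    """Off-policy REINFORCE-style objective with importance-sampling correction.

    logp_surrogate: log-likelihoods of sampled sequences under the surrogate
                    (tractable) policy being trained.
    logp_behavior:  log-likelihoods under the behavior policy that generated
                    the samples.
    rewards:        scalar task rewards (e.g. answer correctness).
    """
    # Importance weights correct for the surrogate/behavior mismatch; clipping
    # the log-ratio keeps the variance of the off-policy estimate bounded.
    w = np.exp(np.clip(logp_surrogate - logp_behavior, -clip, clip))
    # Weighted objective: maximizing this pushes probability mass toward
    # high-reward sequences sampled off-policy.
    return float(np.mean(w * rewards))
```

When the surrogate matches the behavior policy exactly, all weights are 1 and the objective reduces to the plain average reward.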

📝 Abstract
We propose DiFFPO, Diffusion Fast and Furious Policy Optimization, a unified framework for training masked diffusion large language models (dLLMs) to reason not only better (furious), but also faster via reinforcement learning (RL). We first unify existing baseline approaches such as d1 by proposing to train surrogate policies via off-policy RL, whose likelihood is much more tractable as an approximation to the true dLLM policy. This naturally motivates a more accurate and informative two-stage likelihood approximation combined with importance sampling correction, which leads to generalized RL algorithms with better sample efficiency and superior task performance. Second, we propose a new direction of jointly training efficient samplers/controllers for the dLLM policy. Via RL, we incentivize dLLMs' natural multi-token prediction capabilities by letting the model learn to adaptively allocate an inference threshold for each prompt. By jointly training the sampler, we obtain better accuracy with a lower number of function evaluations (NFEs) than training the model alone, achieving the best performance in improving the Pareto frontier of dLLM inference-time compute. We showcase the effectiveness of our pipeline by training open-source large diffusion language models on benchmark math and planning tasks.
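The per-prompt inference threshold described in the abstract amounts to a confidence cutoff that decides how many masked positions the sampler commits per forward pass: a lower threshold commits more tokens per step (fewer NFEs), a higher one decodes more cautiously. The sketch below is hypothetical; `confidence_fn` stands in for one forward pass of the dLLM, and `tau` for the controller's chosen threshold.

```python
import numpy as np

MASK = -1  # sentinel for a still-masked position

def threshold_sample(confidence_fn, length, tau, max_steps=64):
    """Greedy masked-diffusion decoding with a confidence threshold.

    confidence_fn(seq) -> (tokens, conf): a proposed token and a confidence
    score per position; each call models one forward pass (one NFE).
    tau: per-prompt threshold; positions with confidence >= tau are committed.
    Returns the decoded sequence and the number of function evaluations used.
    """
    seq = np.full(length, MASK)
    nfes = 0
    for _ in range(max_steps):
        if not (seq == MASK).any():
            break  # fully decoded
        tokens, conf = confidence_fn(seq)
        nfes += 1
        masked = seq == MASK
        accept = masked & (conf >= tau)
        if not accept.any():
            # Guarantee progress: commit the single most confident masked slot.
            idx = np.argmax(np.where(masked, conf, -np.inf))
            accept = np.zeros_like(masked)
            accept[idx] = True
        seq[accept] = tokens[accept]
    return seq, nfes
```

With a confident model and a low `tau`, the whole sequence can be committed in one NFE; with `tau` above every confidence score, decoding degrades to one token per step, which is the quality–latency dial the learned controller adjusts per prompt.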
Problem

Research questions and friction points this paper is trying to address.

Training diffusion LLMs for faster reasoning via reinforcement learning
Improving sample efficiency and task performance through better likelihood approximation
Jointly training efficient samplers to reduce inference-time function evaluations (NFEs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training surrogate policies via off-policy reinforcement learning
Two-stage likelihood approximation with importance sampling correction
Jointly training efficient samplers to reduce the number of function evaluations (NFEs)