🤖 AI Summary
This work addresses the weak reasoning capabilities of diffusion-based large language models (dLLMs). To strengthen their mathematical and logical reasoning, we propose the d1 framework. Methodologically, we introduce diffu-GRPO, a novel critic-free policy-gradient reinforcement learning algorithm tailored to the diffusion paradigm, combined with masked supervised fine-tuning (masked SFT) that distills self-corrective reasoning behaviors directly from existing reasoning datasets. This combined optimization targets key bottlenecks of dLLMs: modeling long-range dependencies and performing sequential, stepwise deduction under non-autoregressive generation. Experiments show that d1 significantly outperforms prior dLLMs across multiple mathematical and logical reasoning benchmarks. Notably, it is the first diffusion-architecture model to reach reasoning performance comparable to state-of-the-art autoregressive LLMs, establishing a new state of the art in dLLM reasoning.
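The masked SFT ingredient mentioned above can be illustrated with a minimal sketch. This is a hypothetical toy (function names and the masking scheme are assumptions, not the paper's implementation): keep the prompt intact, randomly mask a fraction of response tokens, and record which original tokens the model must reconstruct at those positions, which is the standard training signal for masked diffusion language models.

```python
import random

def mask_response_tokens(prompt_ids, response_ids, mask_id, p, seed=0):
    """Toy masked-SFT data preparation (hypothetical helper, not the
    paper's code): the prompt is left untouched; each response token is
    replaced by mask_id with probability p. Returns the corrupted
    sequence and (position, original_token) pairs to predict."""
    rng = random.Random(seed)  # deterministic for the example
    masked = list(response_ids)
    targets = []
    for i, tok in enumerate(response_ids):
        if rng.random() < p:
            masked[i] = mask_id
            targets.append((len(prompt_ids) + i, tok))
    return prompt_ids + masked, targets

# Example: a 2-token prompt and a 4-token response, masking rate 0.5.
ids, targets = mask_response_tokens([101, 102], [5, 6, 7, 8],
                                    mask_id=0, p=0.5)
```

During training, the cross-entropy loss would be computed only at the returned target positions, so the model learns to fill in masked reasoning steps given the surrounding context.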
📝 Abstract
Recent large language models (LLMs) have demonstrated strong reasoning capabilities that benefit from online reinforcement learning (RL). These capabilities have primarily been demonstrated within the left-to-right autoregressive (AR) generation paradigm. In contrast, non-autoregressive paradigms based on diffusion generate text in a coarse-to-fine manner. Although recent diffusion-based large language models (dLLMs) have achieved language modeling performance competitive with their AR counterparts, it remains unclear whether dLLMs can also leverage recent advances in LLM reasoning. To this end, we propose d1, a framework that adapts pre-trained masked dLLMs into reasoning models via a combination of supervised finetuning (SFT) and RL. Specifically, we develop and extend techniques to improve reasoning in pretrained dLLMs: (a) we utilize a masked SFT technique to distill knowledge and instill self-improvement behavior directly from existing datasets, and (b) we introduce a novel critic-free, policy-gradient based RL algorithm called diffu-GRPO. Through empirical studies, we investigate the performance of different post-training recipes on multiple mathematical and logical reasoning benchmarks. We find that d1 yields the best performance and significantly improves the performance of a state-of-the-art dLLM.
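The "critic-free" aspect of a GRPO-style algorithm can be sketched briefly. The idea, which diffu-GRPO adapts to the diffusion setting (the adaptation itself is not shown here), is that instead of training a learned value function as a baseline, several completions are sampled per prompt and each one's advantage is its reward relative to the group's mean, normalized by the group's standard deviation. A minimal sketch, assuming only that rewards are scalar scores per sampled completion:

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Critic-free, GRPO-style advantage estimate: each completion in a
    sampled group is scored relative to the group mean, normalized by
    the group's std. The group itself acts as the baseline, so no
    learned value network is required."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: 4 completions sampled for one prompt, scored 1.0 if a
# verifier accepts the final answer and 0.0 otherwise.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Completions with above-average reward receive positive advantages and are reinforced by the policy gradient; below-average ones are suppressed. The advantages within a group sum to zero by construction.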