🤖 AI Summary
To address the low post-training efficiency and training-inference objective misalignment of diffusion language models (dLLMs), which severely hinder their complex reasoning capabilities (e.g., mathematical reasoning), this paper proposes DiRL, a holistic, end-to-end efficient post-training framework. Methodologically, DiRL introduces (1) DiPO, the first unbiased Group Relative Policy Optimization (GRPO) algorithm tailored for dLLMs, which effectively mitigates policy bias; and (2) a tightly integrated training-inference co-design that combines FlexAttention-accelerated blockwise training with LMDeploy-optimized inference deployment. The resulting model, DiRL-8B-Instruct, achieves state-of-the-art performance among dLLMs on multiple mathematical reasoning benchmarks and surpasses comparably sized Qwen2.5-series models.
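The paper does not spell out DiPO's exact estimator here, but the group-relative normalization at the heart of standard GRPO is simple to illustrate. The sketch below is a minimal, hypothetical example of computing group-relative advantages over sampled completions; it is not the authors' DiPO implementation, and the shapes and epsilon value are assumptions.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Standard GRPO-style advantages: each completion's reward is normalized
    against the other completions sampled for the same prompt.
    `rewards` has shape (num_prompts, group_size)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each (binary correctness rewards).
rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(group_relative_advantages(rewards))
```

DiPO's contribution, per the abstract, is making this kind of groupwise objective unbiased for the diffusion (non-autoregressive) likelihood; that correction is what the sketch above does not capture.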
📝 Abstract
Diffusion Language Models (dLLMs) have emerged as promising alternatives to Auto-Regressive (AR) models. While recent efforts have validated their pre-training potential and improved their inference speed, the post-training landscape for dLLMs remains underdeveloped. Existing methods suffer from computational inefficiency and objective mismatches between training and inference, severely limiting performance on complex reasoning tasks such as mathematics. To address this, we introduce DiRL, an efficient post-training framework that tightly integrates FlexAttention-accelerated blockwise training with LMDeploy-optimized inference. This architecture enables a streamlined online model update loop, facilitating efficient two-stage post-training (Supervised Fine-Tuning followed by Reinforcement Learning). Building on this framework, we propose DiPO, the first unbiased Group Relative Policy Optimization (GRPO) implementation tailored for dLLMs. We validate our approach by training DiRL-8B-Instruct on high-quality math data. Our model achieves state-of-the-art math performance among dLLMs and surpasses comparable models in the Qwen2.5 series on several benchmarks.
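For readers unfamiliar with how FlexAttention supports blockwise training, the sketch below shows one plausible masking pattern for block-diffusion models: bidirectional attention within a block and causal attention across blocks. It uses PyTorch's `torch.nn.attention.flex_attention` API (PyTorch 2.5+, CUDA device assumed); the block size and the specific mask are illustrative assumptions, not the masks used in DiRL.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

BLOCK = 32  # hypothetical diffusion block size (not from the paper)

def block_diffusion_mask(b, h, q_idx, kv_idx):
    # Tokens attend bidirectionally inside their own block and causally to
    # all earlier blocks; later blocks are masked out.
    return (kv_idx // BLOCK) <= (q_idx // BLOCK)

SEQ = 256
mask = create_block_mask(block_diffusion_mask, B=None, H=None,
                         Q_LEN=SEQ, KV_LEN=SEQ, device="cuda")

q = torch.randn(1, 8, SEQ, 64, device="cuda")
k = torch.randn(1, 8, SEQ, 64, device="cuda")
v = torch.randn(1, 8, SEQ, 64, device="cuda")
out = flex_attention(q, k, v, block_mask=mask)  # (1, 8, SEQ, 64)
```

Because the mask is expressed as a `mask_mod` function rather than a dense tensor, FlexAttention can skip fully masked blocks, which is what makes this style of blockwise training efficient in practice.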