Improving Reasoning for Diffusion Language Models via Group Diffusion Policy Optimization

📅 2025-10-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion language models (DLMs) face critical challenges in reinforcement learning fine-tuning, including non-differentiable likelihoods, high variance in ELBO estimation, and low sampling efficiency. To address these, we propose Group Diffusion Policy Optimization (GDPO), which decomposes the sources of ELBO variance and introduces a semi-deterministic Monte Carlo estimator that drastically reduces variance under extremely low sampling budgets. GDPO further integrates deterministic integral approximation with a two-level Monte Carlo sampling scheme to enable efficient and stable sequence-level likelihood optimization. Experiments demonstrate that GDPO consistently outperforms both pretrained DLMs and state-of-the-art methods such as diffu-GRPO across multiple benchmarks—including mathematical reasoning and code generation—marking the first successful realization of efficient policy optimization for DLMs within the RLHF paradigm.

Technology Category

Application Category

📝 Abstract
Diffusion language models (DLMs) enable parallel, order-agnostic generation with iterative refinement, offering a flexible alternative to autoregressive large language models (LLMs). However, adapting reinforcement learning (RL) fine-tuning to DLMs remains an open challenge because of the intractable likelihood. Pioneering work such as diffu-GRPO estimated token-level likelihoods via one-step unmasking. While computationally efficient, this approach is severely biased. A more principled foundation lies in sequence-level likelihoods, where the evidence lower bound (ELBO) serves as a surrogate. Yet, despite this clean mathematical connection, ELBO-based methods have seen limited adoption due to the prohibitive cost of likelihood evaluation. In this work, we revisit ELBO estimation and disentangle its sources of variance. This decomposition motivates reducing variance through fast, deterministic integral approximations along a few pivotal dimensions. Building on this insight, we introduce extbf{Group Diffusion Policy Optimization (GDPO)}, a new RL algorithm tailored for DLMs. GDPO leverages simple yet effective Semi-deterministic Monte Carlo schemes to mitigate the variance explosion of ELBO estimators under vanilla double Monte Carlo sampling, yielding a provably lower-variance estimator under tight evaluation budgets. Empirically, GDPO achieves consistent gains over pretrained checkpoints and outperforms diffu-GRPO, one of the state-of-the-art baselines, on the majority of math, reasoning, and coding benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Optimizing reinforcement learning fine-tuning for diffusion language models
Addressing variance issues in ELBO estimation for DLMs
Improving mathematical reasoning and coding performance of DLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group Diffusion Policy Optimization for RL fine-tuning
Semi-deterministic Monte Carlo for variance reduction
ELBO estimation via deterministic integral approximations
🔎 Similar Papers
No similar papers found.
K
Kevin Rojas
Georgia Institute of Technology
J
Jiahe Lin
ML Research, Morgan Stanley
K
Kashif Rasul
ML Research, Morgan Stanley
Anderson Schneider
Anderson Schneider
Morgan Stanley
Machine Learning
Y
Yuriy Nevmyvaka
ML Research, Morgan Stanley
Molei Tao
Molei Tao
Associate Professor, Georgia Institute of Technology
foundation of machine learningapplied & computational mathstochastic/nonlinear dynamics
W
Wei Deng
ML Research, Morgan Stanley