Stabilizing Reinforcement Learning for Diffusion Language Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of directly applying Group Relative Policy Optimization (GRPO) to diffusion language models (dLLMs), where intractable sequence probabilities force noisy importance-ratio estimates that trigger reward collapse and policy drift. To resolve this, we propose StableDRL—the first stable reinforcement learning framework tailored for dLLMs—which employs unconditional clipping to suppress update spikes and a self-normalized policy constraint to bound policy updates, thereby breaking the noise-variance feedback loop. StableDRL is compatible with importance ratios estimated via either ELBO-based or mean-field likelihood proxies, and extends to block-wise diffusion models through a staircase attention mechanism. This approach significantly enhances training stability during RL fine-tuning, effectively prevents reward collapse, and provides a reliable recipe for aligning diffusion language models.
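The two stabilizers can be contrasted with standard GRPO in a small sketch. This is a toy numpy illustration of the idea only, not the paper's implementation: the function names, the exact clipping form, and the ratio-based normalization weights are assumptions.

```python
import numpy as np

def grpo_objective(ratios, advantages, eps=0.2):
    """Standard GRPO surrogate: conditional clipping via min(), fixed 1/G mean.
    With a negative advantage, an outlier ratio keeps its unclipped term in
    the min(), so estimation noise can bypass the clip and spike the update."""
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1 - eps, 1 + eps) * advantages
    return np.minimum(unclipped, clipped).mean()

def stabledrl_objective(ratios, advantages, eps=0.2):
    """Sketch of the StableDRL-style stabilizers (assumed form):
    (i) unconditional clipping — every ratio is clipped before weighting,
        regardless of the advantage sign;
    (ii) self-normalization — weights are nonnegative and sum to 1, so the
        update is a convex combination of per-sample gradients instead of
        a fixed group-size average."""
    r = np.clip(ratios, 1 - eps, 1 + eps)   # (i) always clip
    w = r / r.sum()                          # (ii) self-normalized weights
    return (w * advantages).sum()

# One noisy outlier ratio (10x) with a negative advantage:
ratios = np.array([10.0, 1.0, 1.0])
advantages = np.array([-1.0, 1.0, 1.0])
print(grpo_objective(ratios, advantages))       # outlier dominates the mean
print(stabledrl_objective(ratios, advantages))  # bounded, well-behaved update
```

In the example, the standard surrogate lets the 10x outlier contribute its full unclipped term, while the stabilized version bounds its influence both by clipping and by the normalized weights.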

📝 Abstract
Group Relative Policy Optimization (GRPO) is highly effective for post-training autoregressive (AR) language models, yet its direct application to diffusion large language models (dLLMs) often triggers reward collapse. We identify two sources of incompatibility. First, GRPO relies on importance ratios defined by sequence probabilities, which are intractable in dLLMs and must be estimated (e.g., via ELBO-based or mean-field likelihood proxies), yielding inherently noisy ratios. Second, standard GRPO's formulation is not designed for estimated ratios: its conditional clipping can be anomalously bypassed by model-agnostic estimation noise, producing gradient spikes, while its fixed group-size normalization amplifies gradient-magnitude fluctuations under high-variance ratio estimates. We show these effects form a self-reinforcing instability loop that drives policy drift and further increases ratio variance. To break this loop, we propose StableDRL, a reformulation of GRPO tailored for dLLMs that uses (i) unconditional clipping to suppress outlier-induced spikes and (ii) self-normalization to constrain updates within the convex hull of per-sample gradients. We further extend StableDRL to block-wise diffusion models via a staircase attention mechanism.
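The first source of incompatibility — noisy ratios from estimated likelihoods — can be seen in a toy simulation. This is an illustrative sketch under assumed noise, not the paper's estimator: even when old and new policies are identical (true ratio exactly 1), replacing each sequence log-probability with a finite-sample ELBO-style estimate makes the importance ratio scatter around 1.

```python
import numpy as np

def elbo_ratio_estimate(rng, n_mc=4, noise=0.5):
    """Toy model: the intractable sequence log-prob of each policy is replaced
    by a Monte-Carlo proxy (mean of n_mc noisy samples around the true value,
    here 0 for both policies). The estimated importance ratio
    exp(elbo_new - elbo_old) is then noisy even though the true ratio is 1."""
    elbo_new = (noise * rng.standard_normal(n_mc)).mean()
    elbo_old = (noise * rng.standard_normal(n_mc)).mean()
    return np.exp(elbo_new - elbo_old)

rng = np.random.default_rng(0)
ratios = np.array([elbo_ratio_estimate(rng) for _ in range(1000)])
print(ratios.min(), ratios.max())  # spread comes purely from estimation noise
```

The spread in these ratios is exactly what standard GRPO's conditional clipping and fixed normalization were not designed for, which is the instability loop the abstract describes.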
Problem

Research questions and friction points this paper is trying to address.

diffusion language models
reinforcement learning
reward collapse
importance sampling
policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

StableDRL
diffusion language models
reinforcement learning stability
importance ratio estimation
self-normalization