Reinforcement Learning for Diffusion LLMs with Entropy-Guided Step Selection and Stepwise Advantages

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reinforcement learning approaches struggle to directly optimize diffusion language models because the sequence likelihood is intractable, so they often resort to biased approximations that disregard the temporal structure of the denoising process. This work formulates diffusion-based generation as a finite-horizon Markov decision process and introduces, for the first time, an unbiased policy gradient method tailored to this framework. The method recursively decomposes the advantage function, incorporates an entropy-guided mechanism for selecting denoising steps, and estimates intermediate advantages from single-step denoising rewards (thereby avoiding multi-step rollouts), which preserves trajectory-level temporal information. Empirically, it achieves state-of-the-art performance on code generation and logical reasoning benchmarks and significantly outperforms existing reinforcement learning post-training approaches for diffusion language models on mathematical reasoning tasks.

📝 Abstract
Reinforcement learning (RL) has been effective for post-training autoregressive (AR) language models, but extending these methods to diffusion language models (DLMs) is challenging due to intractable sequence-level likelihoods. Existing approaches therefore rely on surrogate likelihoods or heuristic approximations, which can introduce bias and obscure the sequential structure of denoising. We formulate diffusion-based sequence generation as a finite-horizon Markov decision process over the denoising trajectory and derive an exact, unbiased policy gradient that decomposes over denoising steps and is expressed in terms of intermediate advantages, without requiring explicit evaluation of the sequence likelihood. To obtain a practical and compute-efficient estimator, we (i) select denoising steps for policy updates via an entropy-guided approximation bound, and (ii) estimate intermediate advantages using a one-step denoising reward naturally provided by the diffusion model, avoiding costly multi-step rollouts. Experiments on coding and logical reasoning benchmarks demonstrate state-of-the-art results, with strong competitive performance on mathematical reasoning, outperforming existing RL post-training approaches for DLMs. Code is available at https://github.com/vishnutez/egspo-dllm-rl.
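The two practical ingredients described in the abstract — entropy-guided selection of denoising steps and stepwise advantages from one-step denoising rewards — can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); the function names, the top-k entropy selection rule, and the mean-baseline advantage below are illustrative assumptions standing in for the paper's entropy-guided approximation bound and advantage estimator.

```python
import numpy as np

def token_entropy(probs):
    # Mean Shannon entropy over token positions: H = -sum_v p(v) log p(v).
    # `probs` has shape (num_positions, vocab_size).
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def select_steps_by_entropy(step_probs, k):
    # Rank denoising steps by mean token entropy and keep the k highest-
    # entropy steps for policy updates -- a simple stand-in for the paper's
    # entropy-guided step selection.
    entropies = [token_entropy(p) for p in step_probs]
    return sorted(np.argsort(entropies)[-k:].tolist())

def stepwise_advantages(step_rewards):
    # Illustrative group baseline: advantage = reward minus the mean reward,
    # mimicking intermediate advantages built from one-step denoising
    # rewards rather than multi-step rollouts.
    r = np.asarray(step_rewards, dtype=float)
    return (r - r.mean()).tolist()
```

For example, a near-uniform step distribution has high entropy and is selected before a near-deterministic one, matching the intuition that uncertain denoising steps carry the most learning signal.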
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Diffusion Language Models
Sequence-level Likelihood
Policy Gradient
Denoising Trajectory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Reinforcement Learning
Entropy-Guided Step Selection
Stepwise Advantages
Policy Gradient
Vishnu Teja Kunde
Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas
Fatemeh Doudi
Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas
Mahdi Farahbakhsh
Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas
Dileep Kalathil
Texas A&M University
Reinforcement Learning, Machine Learning, Stochastic Control
Krishna Narayanan
Sanchez Chair Professor in ECEN, Texas A&M University
Coding Theory, Information Theory, Artificial Intelligence, Wireless Networks
Jean-Francois Chamberland
Professor, Texas A&M University
Communication and Information Theory, Decision and Control, Computer Systems and Networks, Statistical Inference, Learning