🤖 AI Summary
Reinforcement learning for diffusion language models often suffers from low training efficiency and unstable policy updates. This work proposes the Spatio-Temporal Pruning (STP) framework, which, for the first time, integrates two kinds of pruning into the RL training of diffusion language models: spatial pruning, which constrains the exploration space with static priors, and temporal pruning, which skips redundant late-stage denoising steps. Theoretical analysis shows that STP strictly reduces the variance of the log-likelihood estimate, making policy updates more stable. Experiments confirm that STP outperforms state-of-the-art baselines in both training efficiency and task accuracy.
📝 Abstract
Reinforcement Learning (RL) is crucial for unlocking the complex reasoning capabilities of Diffusion-based Large Language Models (dLLMs). However, applying RL to dLLMs faces unique challenges in efficiency and stability. To address these challenges, we propose Spatio-Temporal Pruning (STP), a framework designed to simultaneously improve the efficiency and stability of RL for dLLMs. STP compresses the redundancy in the generative process through: (1) *spatial pruning*, which constrains the exploration space using static priors; and (2) *temporal pruning*, which bypasses redundant late-stage refinement steps. Our theoretical analysis demonstrates that STP strictly reduces the variance of the log-likelihood estimation, thereby ensuring more stable policy updates. Extensive experiments demonstrate that STP surpasses state-of-the-art baselines in both efficiency and accuracy. Our code is available at https://github.com/Lolo1222/STP.
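To make the two pruning ideas concrete, here is a minimal, hypothetical sketch of how they might slot into a rollout loop for a masked-diffusion LM. Everything here is illustrative: `spatial_prune`, `temporal_prune_schedule`, `prior_mask`, and `keep_ratio` are invented names and values, not the paper's implementation (see the linked repository for the actual code).

```python
import torch

def spatial_prune(logits: torch.Tensor, prior_mask: torch.Tensor) -> torch.Tensor:
    # Spatial pruning (illustrative): a static prior marks, as a boolean mask,
    # which vocabulary entries are plausible at each position; everything else
    # is suppressed, shrinking the space the RL policy has to explore.
    return logits.masked_fill(~prior_mask, float("-inf"))

def temporal_prune_schedule(total_steps: int, keep_ratio: float = 0.6) -> range:
    # Temporal pruning (illustrative): keep only the early denoising steps and
    # skip the late-stage refinement steps that the abstract describes as
    # redundant. `keep_ratio` is a made-up hyperparameter, not from the paper.
    return range(int(total_steps * keep_ratio))

def rollout(model, x_t: torch.Tensor, prior_mask: torch.Tensor,
            total_steps: int = 64) -> torch.Tensor:
    # Hypothetical pruned rollout for a masked-diffusion LM.
    for t in temporal_prune_schedule(total_steps):      # temporal pruning
        logits = model(x_t, t)                          # denoiser forward pass
        logits = spatial_prune(logits, prior_mask)      # spatial pruning
        x_t = logits.argmax(dim=-1)                     # greedy decode (illustrative)
    return x_t
```

One way to read the variance claim through this sketch: fewer denoising steps and a smaller candidate set per position mean fewer stochastic terms entering the log-likelihood estimate, which is consistent with the abstract's argument that pruning yields more stable policy updates.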