🤖 AI Summary
This work addresses the high computational cost of trajectory probability estimation in offline policy optimization for diffusion-based large language models (dLLMs). The authors propose two trajectory reduction strategies: (i) under reference policy regularization, the probability ratio of the newly unmasked tokens is an unbiased estimate of the probability ratio of the intermediate diffusion states, and (ii) the probability of the full trajectory can be estimated with a single forward pass over a remasked final state. Together, these reduce trajectory probability computation to a single terminal-state remasking operation. Building on these results, the paper introduces the dTRPO optimization framework, which integrates reference policy regularization, remasking, and policy-gradient optimization. Experiments on 7B-scale dLLMs show gains of up to 9.6% on STEM, up to 4.3% on coding, and up to 3.0% on instruction-following tasks, while improving both training and generation efficiency.
📝 Abstract
Diffusion Large Language Models (dLLMs) introduce a new paradigm for language generation, which in turn presents new challenges for aligning them with human preferences. In this work, we aim to improve policy optimization for dLLMs by reducing the cost of trajectory probability calculation, thereby enabling scaled-up offline policy training. We prove that: (i) under reference policy regularization, the probability ratio of the newly unmasked tokens is an unbiased estimate of that of intermediate diffusion states, and (ii) the probability of the full trajectory can be effectively estimated with a single forward pass over a re-masked final state. By integrating these two trajectory reduction strategies into a policy optimization objective, we propose Trajectory Reduction Policy Optimization (dTRPO). We evaluate dTRPO on 7B dLLMs across instruction-following and reasoning benchmarks. Results show that it substantially improves the core performance of state-of-the-art dLLMs, achieving gains of up to 9.6% on STEM tasks, up to 4.3% on coding tasks, and up to 3.0% on instruction-following tasks. Moreover, dTRPO exhibits strong training efficiency due to its offline, single-forward nature, and achieves improved generation efficiency through high-quality outputs.
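The single-pass estimation idea above can be sketched in miniature: remask the generated positions of the final state, score them once with the current policy and once with the reference policy, and accumulate the log-ratio only over the newly unmasked tokens. This is a rough illustrative sketch, not the paper's implementation; the `policy_logp` / `ref_logp` interfaces and the toy probability values are assumptions made for the example.

```python
import math

def trajectory_ratio(final_tokens, generated_positions, policy_logp, ref_logp):
    """Single-pass sketch of the trajectory probability ratio.

    Instead of replaying every diffusion step, remask the generated
    positions of the terminal state and score all of them with one
    call to each policy. `policy_logp` / `ref_logp` are hypothetical
    interfaces mapping a remasked sequence to per-position token
    log-probabilities.
    """
    # Build the remasked terminal state: generated positions -> "<mask>"
    remasked = [
        "<mask>" if i in generated_positions else tok
        for i, tok in enumerate(final_tokens)
    ]
    # One "forward pass" per policy over the remasked state
    logp = policy_logp(remasked)       # {position: {token: log-prob}}
    logp_ref = ref_logp(remasked)
    # Accumulate the log-ratio only over the newly unmasked tokens
    log_ratio = sum(
        logp[i][final_tokens[i]] - logp_ref[i][final_tokens[i]]
        for i in generated_positions
    )
    return math.exp(log_ratio)

# Toy policies over a two-token vocabulary: uniform reference,
# slightly peaked current policy (illustrative numbers only).
def ref_logp(seq):
    return {i: {"a": math.log(0.5), "b": math.log(0.5)}
            for i, t in enumerate(seq) if t == "<mask>"}

def policy_logp(seq):
    return {i: {"a": math.log(0.6), "b": math.log(0.4)}
            for i, t in enumerate(seq) if t == "<mask>"}

r = trajectory_ratio(["The", "a", "b"], {1, 2}, policy_logp, ref_logp)
# r = (0.6/0.5) * (0.4/0.5) = 0.96
```

The point of the sketch is the cost structure: the ratio for the whole generation trajectory is obtained from one scoring pass over the remasked final state, rather than one pass per diffusion step.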