🤖 AI Summary
Problem: Large language diffusion models (e.g., LLaDA) suffer from training instability during human preference alignment because the Monte Carlo ELBO estimates used in place of exact log-likelihoods have high variance, which propagates into the preference-optimization gradients.
Method: This paper proposes Variance-Reduced Preference Optimization (VRPO), the first framework to systematically model variance sources in preference optimization gradients for masked diffusion models. It theoretically analyzes the bias–variance trade-off in ELBO estimation and introduces unbiased variance reduction techniques: optimal Monte Carlo budget allocation and antithetic sampling.
Results: Applied to LLaDA, VRPO yields LLaDA 1.5, which achieves consistent gains across domains—mathematics (GSM8K +4.7), code generation (HumanEval +3.0, MBPP +1.8), and alignment (IFEval +4.0, Arena-Hard +4.3)—surpassing its SFT-only predecessor and remaining highly competitive on mathematics with strong autoregressive models and other language MDMs.
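The intuition behind antithetic (shared-draw) sampling can be seen in a toy simulation. This is a hedged sketch, not the paper's implementation: it assumes each per-sample ELBO term decomposes into a large noise component `g(t)` shared by the policy and reference models plus a small model-specific offset, so reusing the same timestep draws for both models cancels the shared noise in their difference.

```python
import random
import statistics

def elbo_term(t, offset):
    # Hypothetical per-sample ELBO term: a large shared noise component g(t)
    # plus a small model-specific offset.
    g = (t - 0.5) ** 2 * 10.0
    return g + offset

def elbo_diff(n_samples, shared, rng):
    # Monte Carlo estimate of ELBO_policy - ELBO_ref from n_samples draws.
    # With shared=True, both models are evaluated on the SAME timestep draws
    # (antithetic-style coupling), so the shared component cancels exactly.
    ts_policy = [rng.random() for _ in range(n_samples)]
    ts_ref = ts_policy if shared else [rng.random() for _ in range(n_samples)]
    e_policy = sum(elbo_term(t, 0.3) for t in ts_policy) / n_samples
    e_ref = sum(elbo_term(t, 0.1) for t in ts_ref) / n_samples
    return e_policy - e_ref

rng = random.Random(0)
indep_diffs = [elbo_diff(8, shared=False, rng=rng) for _ in range(2000)]
shared_diffs = [elbo_diff(8, shared=True, rng=rng) for _ in range(2000)]
# Both estimators are unbiased for the true gap (0.2), but the coupled
# estimator's variance collapses because the shared noise cancels.
print(statistics.variance(indep_diffs), statistics.variance(shared_diffs))
```

Both estimators target the same expected difference, so the coupling is an unbiased variance reduction—the property VRPO needs to keep the preference gradient unbiased.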
📝 Abstract
While Masked Diffusion Models (MDMs), such as LLaDA, present a promising paradigm for language modeling, there has been relatively little effort in aligning these models with human preferences via reinforcement learning. The challenge primarily arises from the high variance in Evidence Lower Bound (ELBO)-based likelihood estimates required for preference optimization. To address this issue, we propose Variance-Reduced Preference Optimization (VRPO), a framework that formally analyzes the variance of ELBO estimators and derives bounds on both the bias and variance of preference optimization gradients. Building on this theoretical foundation, we introduce unbiased variance reduction strategies, including optimal Monte Carlo budget allocation and antithetic sampling, that significantly improve the performance of MDM alignment. We demonstrate the effectiveness of VRPO by applying it to LLaDA, and the resulting model, LLaDA 1.5, outperforms its SFT-only predecessor consistently and significantly across mathematical (GSM8K +4.7), code (HumanEval +3.0, MBPP +1.8), and alignment benchmarks (IFEval +4.0, Arena-Hard +4.3). Furthermore, LLaDA 1.5 demonstrates a highly competitive mathematical performance compared to strong language MDMs and ARMs. Project page: https://ml-gsai.github.io/LLaDA-1.5-Demo/.
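To make concrete where the ELBO estimates enter the preference loss, here is a minimal sketch of a DPO-style objective with Monte Carlo ELBO estimates standing in for the intractable exact log-likelihoods. The function name and argument names are hypothetical, not from the paper; the point is only that noise in each ELBO estimate flows directly into the loss and its gradient, which is what VRPO's variance reduction addresses.

```python
import math

def dpo_style_loss(elbo_w_policy, elbo_w_ref, elbo_l_policy, elbo_l_ref,
                   beta=0.1):
    # DPO-style preference loss with ELBO estimates in place of exact
    # log-likelihoods (hypothetical sketch). y_w is the preferred response,
    # y_l the rejected one; each argument is a Monte Carlo ELBO estimate.
    margin = (elbo_w_policy - elbo_w_ref) - (elbo_l_policy - elbo_l_ref)
    # -log sigmoid(beta * margin)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Example: the policy scores the preferred response higher (relative to the
# reference) than the rejected one, giving a positive margin and a small loss.
loss = dpo_style_loss(-10.0, -12.0, -15.0, -14.0, beta=0.5)
print(round(loss, 4))
```

Because the margin is a difference of differences of noisy estimates, shared draws across the paired terms and a well-allocated sampling budget both reduce gradient variance without introducing bias.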