LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language diffusion models (e.g., LLaDA) suffer from training instability during human preference alignment due to excessively high variance in ELBO-based gradient estimates. Method: this paper proposes Variance-Reduced Preference Optimization (VRPO), the first framework to systematically model the variance sources in preference-optimization gradients for masked diffusion models. It theoretically analyzes the bias–variance trade-off in ELBO estimation and introduces unbiased variance-reduction techniques: optimal Monte Carlo budget allocation and antithetic sampling. Results: applied to LLaDA, the resulting LLaDA 1.5 achieves substantial gains across domains, including mathematics (GSM8K +4.7), code generation (HumanEval +3.0, MBPP +1.8), and alignment (IFEval +4.0, Arena-Hard +4.3), outperforming its SFT-only predecessor and remaining competitive with strong autoregressive and diffusion models.
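
For concreteness, here is a sketch of the two quantities at play, in notation assumed from the LLaDA papers (the conditioning on a prompt x and the symbol choices are additions, not quoted from the paper): the ELBO lower-bounds the response log-likelihood, and the Monte Carlo estimator splits a budget of n = n_t · n_{y_t} model evaluations between n_t sampled timesteps and n_{y_t} masked sequences per timestep; per the paper's analysis, allocating the entire budget to timesteps (n_{y_t} = 1) minimizes variance.

```latex
% ELBO for a masked diffusion model p_\theta (notation assumed), where y_t
% masks each token of y independently with probability t:
\[
\log p_\theta(y \mid x) \;\ge\;
\mathbb{E}_{t \sim \mathcal{U}(0,1]}\,
\mathbb{E}_{y_t \sim q_{t \mid 0}(\cdot \mid y)}
\Bigl[ \tfrac{1}{t} \sum_{i=1}^{L}
\mathbf{1}\bigl[y_t^{i} = \mathtt{MASK}\bigr]\,
\log p_\theta\bigl(y^{i} \mid x,\, y_t\bigr) \Bigr]
\]
% Monte Carlo estimator with budget n = n_t \cdot n_{y_t}
% (t_j: sampled timesteps; y_{t_j}^{(k)}: masked sequences per timestep):
\[
\widehat{\mathrm{ELBO}}(y) \;=\;
\frac{1}{n_t\, n_{y_t}} \sum_{j=1}^{n_t} \sum_{k=1}^{n_{y_t}}
\tfrac{1}{t_j} \sum_{i=1}^{L}
\mathbf{1}\bigl[(y_{t_j}^{(k)})^{i} = \mathtt{MASK}\bigr]\,
\log p_\theta\bigl(y^{i} \mid x,\, y_{t_j}^{(k)}\bigr)
\]
```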

📝 Abstract
While Masked Diffusion Models (MDMs), such as LLaDA, present a promising paradigm for language modeling, there has been relatively little effort in aligning these models with human preferences via reinforcement learning. The challenge primarily arises from the high variance in Evidence Lower Bound (ELBO)-based likelihood estimates required for preference optimization. To address this issue, we propose Variance-Reduced Preference Optimization (VRPO), a framework that formally analyzes the variance of ELBO estimators and derives bounds on both the bias and variance of preference optimization gradients. Building on this theoretical foundation, we introduce unbiased variance reduction strategies, including optimal Monte Carlo budget allocation and antithetic sampling, that significantly improve the performance of MDM alignment. We demonstrate the effectiveness of VRPO by applying it to LLaDA, and the resulting model, LLaDA 1.5, outperforms its SFT-only predecessor consistently and significantly across mathematical (GSM8K +4.7), code (HumanEval +3.0, MBPP +1.8), and alignment benchmarks (IFEval +4.0, Arena-Hard +4.3). Furthermore, LLaDA 1.5 demonstrates a highly competitive mathematical performance compared to strong language MDMs and ARMs. Project page: https://ml-gsai.github.io/LLaDA-1.5-Demo/.
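
A minimal PyTorch sketch of how these pieces might fit together, assuming a model that maps a masked token sequence to per-position vocabulary logits; the function names, mask-token id, and hyperparameters are illustrative, not the paper's implementation. It shows the two VRPO choices described above: spending the whole sampling budget on timesteps (one masked sequence per sampled t), and reusing the same (t, mask) draws for the policy and reference ELBOs.

```python
import torch
import torch.nn.functional as F

def sample_perturbations(y, n_samples, mask_id):
    """Draw n_samples (t, mask, y_t) triples. The budget goes entirely to
    timesteps: one masked sequence per sampled t (i.e., n_{y_t} = 1)."""
    out = []
    for _ in range(n_samples):
        t = torch.rand(()).clamp_min(1e-3)           # t ~ U(0, 1]
        mask = torch.rand(y.shape) < t                # mask each token w.p. t
        y_t = torch.where(mask, torch.full_like(y, mask_id), y)
        out.append((t, mask, y_t))
    return out

def elbo_estimate(model, y, perturbations):
    """Monte Carlo ELBO: 1/t-weighted log-likelihoods of masked tokens,
    averaged over the sampled perturbations."""
    total = 0.0
    for t, mask, y_t in perturbations:
        logp = F.log_softmax(model(y_t), dim=-1)      # (L, V); assumed model API
        token_logp = logp.gather(-1, y.unsqueeze(-1)).squeeze(-1)
        total = total + (token_logp * mask).sum() / t
    return total / len(perturbations)

def vrpo_dpo_loss(policy, ref, y_w, y_l, beta=0.1, n=8, mask_id=0):
    """DPO-style preference loss with ELBO estimates standing in for exact
    log-likelihoods. Antithetic sampling: policy and reference share the
    same (t, mask) draws per response, so shared Monte Carlo noise cancels
    in the preference margin."""
    pert_w = sample_perturbations(y_w, n, mask_id)    # for the preferred response
    pert_l = sample_perturbations(y_l, n, mask_id)    # for the dispreferred one
    with torch.no_grad():
        ref_w = elbo_estimate(ref, y_w, pert_w)
        ref_l = elbo_estimate(ref, y_l, pert_l)
    pol_w = elbo_estimate(policy, y_w, pert_w)
    pol_l = elbo_estimate(policy, y_l, pert_l)
    margin = beta * ((pol_w - ref_w) - (pol_l - ref_l))
    return -F.logsigmoid(margin)
```

In this sketch, increasing n reduces estimator variance without introducing bias, and sharing the perturbations between the policy and the frozen reference is the antithetic-sampling step.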
Problem

Research questions and friction points this paper is trying to address.

Aligning masked diffusion models with human preferences via reinforcement learning
Reducing the high variance of ELBO-based likelihood estimates used in preference optimization
Improving the stability and effectiveness of preference optimization for language diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variance-Reduced Preference Optimization (VRPO) framework
Unbiased variance reduction strategies
Optimal Monte Carlo budget allocation
Antithetic sampling (see the toy illustration below)
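
A toy, self-contained illustration of why shared sampling helps (not from the paper): the preference margin is a difference of two noisy estimates, and Var(A - B) = Var(A) + Var(B) - 2 Cov(A, B), so making the two estimates positively correlated by reusing the same draws shrinks the variance of their difference.

```python
import torch

torch.manual_seed(0)
shared = torch.randn(100_000)            # common Monte Carlo noise

a = 1.0 + shared                         # noisy estimate of quantity A
b_shared = 0.5 + shared                  # estimate of B with the SAME noise
b_indep = 0.5 + torch.randn(100_000)     # estimate of B with fresh noise

print((a - b_shared).var())              # ~0.0: shared noise cancels
print((a - b_indep).var())               # ~2.0: independent variances add
```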