Beyond KL Divergence: Policy Optimization with Flexible Bregman Divergences for LLM Reasoning

📅 2026-02-04
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses a critical limitation of existing group-based policy optimization methods for large language model (LLM) reasoning: they rely solely on KL divergence for regularization and overlook the impact of the divergence choice on performance. To remedy this, the authors propose Group-Based Mirror Policy Optimization (GBMPO), a framework that, for the first time, brings general Bregman divergences into group-based policy optimization. GBMPO supports both handcrafted divergences (such as L2 in probability space) and learnable neural mirror maps. By combining group-relative policy optimization with evolutionary-strategies meta-learning, the method achieves 86.7% accuracy on GSM8K (+5.5 points over the Dr. GRPO baseline) and 60.1–60.8% pass@1 on MBPP. Notably, even randomly initialized neural mirror maps yield substantial gains while reducing response length and training variance, demonstrating that divergence selection constitutes a key, previously unexplored dimension for enhancing LLM reasoning capabilities.
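For context, a Bregman divergence is generated by a strictly convex function $\varphi$; the sketch below uses the standard textbook definition, not notation taken from the paper:

$$D_\varphi(p, q) \;=\; \varphi(p) - \varphi(q) - \langle \nabla\varphi(q),\; p - q \rangle.$$

Choosing $\varphi(p) = \sum_i p_i \log p_i$ (negative entropy) recovers the KL divergence $\mathrm{KL}(p \,\|\, q)$ on the probability simplex, while $\varphi(p) = \tfrac{1}{2}\|p\|_2^2$ yields $\tfrac{1}{2}\|p - q\|_2^2$, i.e., the L2-in-probability-space regularizer the summary refers to. The framework's "mirror map" is the gradient $\nabla\varphi$, which is what a neural network can parameterize in place of a hand-designed choice.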

📝 Abstract
Policy optimization methods like Group Relative Policy Optimization (GRPO) and its variants have achieved strong results on mathematical reasoning and code generation tasks. Despite extensive exploration of reward processing strategies and training dynamics, all existing group-based methods exclusively use KL divergence for policy regularization, leaving the choice of divergence function unexplored. We introduce Group-Based Mirror Policy Optimization (GBMPO), a framework that extends group-based policy optimization to flexible Bregman divergences, including hand-designed alternatives (L2 in probability space) and learned neural mirror maps. On GSM8K mathematical reasoning, hand-designed ProbL2-GRPO achieves 86.7% accuracy, improving +5.5 points over the Dr. GRPO baseline. On MBPP code generation, neural mirror maps reach 60.1-60.8% pass@1, with random initialization already capturing most of the benefit. While evolutionary strategies meta-learning provides marginal accuracy improvements, its primary value lies in variance reduction ($\pm$0.2 versus $\pm$0.6) and efficiency gains (15% shorter responses on MBPP), suggesting that random initialization of neural mirror maps is sufficient for most practical applications. These results establish divergence choice as a critical, previously unexplored design dimension in group-based policy optimization for LLM reasoning.
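To make the "flexible divergence" idea concrete, here is a minimal sketch of a generic Bregman divergence and the two instances the abstract contrasts: KL (from a negative-entropy generator) and L2 in probability space (from a squared-norm generator, the "ProbL2" case). The function names are illustrative, not from the paper's code:

```python
import numpy as np

def bregman(p, q, phi, grad_phi):
    """Bregman divergence D_phi(p, q) = phi(p) - phi(q) - <grad_phi(q), p - q>."""
    return phi(p) - phi(q) - grad_phi(q) @ (p - q)

# Negative-entropy generator: on the simplex this recovers KL(p || q).
neg_entropy = lambda p: np.sum(p * np.log(p))
neg_entropy_grad = lambda p: np.log(p) + 1.0

# Squared-Euclidean generator: yields 0.5 * ||p - q||^2 ("L2 in probability space").
sq_norm = lambda p: 0.5 * np.sum(p ** 2)
sq_norm_grad = lambda p: p

p = np.array([0.7, 0.2, 0.1])  # current policy's token distribution (toy)
q = np.array([0.5, 0.3, 0.2])  # reference policy's token distribution (toy)

kl = bregman(p, q, neg_entropy, neg_entropy_grad)
l2 = bregman(p, q, sq_norm, sq_norm_grad)

# Sanity checks against the closed forms.
assert np.isclose(kl, np.sum(p * np.log(p / q)))
assert np.isclose(l2, 0.5 * np.sum((p - q) ** 2))
```

Swapping the generator pair is all it takes to change the regularizer in a GRPO-style objective; a learned neural mirror map would replace `grad_phi` with a (monotone) network, which is presumably what GBMPO's learnable variant does.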
Problem

Research questions and friction points this paper is trying to address.

policy optimization
Bregman divergences
LLM reasoning
KL divergence
group-based methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bregman divergence
policy optimization
mirror maps
LLM reasoning
group-based learning