SEED-GRPO: Semantic Entropy Enhanced GRPO for Uncertainty-Aware Policy Optimization

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit substantial variation in response confidence across prompts, reflecting inherent uncertainty in semantic understanding; however, conventional Group Relative Policy Optimization (GRPO) disregards this heterogeneity by applying uniform policy updates, which can harm generalization and training stability. To address this, the authors propose SEED-GRPO, a GRPO variant that incorporates semantic entropy as an uncertainty signal. By measuring semantic diversity across multiple sampled responses to a prompt, SEED-GRPO modulates the magnitude of policy gradient updates: low-uncertainty (confident) prompts retain the full learning signal, while high-uncertainty prompts receive more conservative, stable updates. Evaluated on five mathematical reasoning benchmarks (AIME24, AMC, MATH, Minerva, and OlympiadBench), SEED-GRPO achieves new state-of-the-art average accuracy, with the largest gains on the most challenging tasks.

📝 Abstract
Large language models (LLMs) exhibit varying levels of confidence across input prompts (questions): some lead to consistent, semantically similar answers, while others yield diverse or contradictory outputs. This variation reflects the LLM's uncertainty about the input prompt, a signal of how confidently the model understands a given problem. However, vanilla Group Relative Policy Optimization (GRPO) treats all prompts equally during policy updates, ignoring this important information about the model's knowledge boundaries. To address this limitation, we propose SEED-GRPO (Semantic Entropy EnhanceD GRPO), which explicitly measures an LLM's uncertainty about an input prompt via semantic entropy. Semantic entropy measures the diversity of meaning across multiple generated answers to a prompt, and SEED-GRPO uses it to modulate the magnitude of policy updates. This uncertainty-aware training mechanism enables dynamic adjustment of policy update magnitudes based on question uncertainty: it makes more conservative updates on high-uncertainty questions while maintaining the original learning signal on confident ones. Experimental results on five mathematical reasoning benchmarks (AIME24 56.7, AMC 68.7, MATH 83.4, Minerva 34.2, and OlympiadBench 48.0) demonstrate that SEED-GRPO achieves new state-of-the-art performance in average accuracy, validating the effectiveness of uncertainty-aware policy optimization.
Problem

Research questions and friction points this paper is trying to address.

Measures LLM uncertainty via the semantic entropy of sampled answers
Adjusts policy update magnitudes based on question uncertainty levels
Improves accuracy on mathematical reasoning benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic entropy measures answer diversity for uncertainty
Dynamic policy updates based on question uncertainty levels
Conservative updates for high-uncertainty questions enhance accuracy
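The mechanism the abstract and bullets describe can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: semantic clusters are approximated here by exact final-answer match (the paper clusters by meaning), and the linear entropy-to-weight mapping is an assumed modulation function.

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """Shannon entropy over semantic clusters of sampled answers.

    Clustering is approximated by exact string match on the final answer;
    a meaning-level grouping (e.g. via entailment) would replace this in
    a faithful implementation.
    """
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def uncertainty_weight(entropy, num_samples):
    """Map entropy to an update weight in [0, 1].

    Low entropy (confident prompt) keeps the full learning signal (weight 1);
    high entropy shrinks it toward 0. Normalizing by the maximum possible
    entropy, log(num_samples), is an assumption for this sketch.
    """
    max_entropy = math.log(num_samples)
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

# Example: six sampled answers to one math prompt, four agreeing.
answers = ["42", "42", "42", "42", "7", "13"]
H = semantic_entropy(answers)
w = uncertainty_weight(H, len(answers))

# GRPO-style group-relative advantages for the six samples; the weight
# scales them before the policy gradient step, so uncertain prompts
# contribute a smaller (more conservative) update.
advantages = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0]
scaled = [a * w for a in advantages]
```

When all samples agree, the entropy is zero and the weight is 1, so the original GRPO update is unchanged; maximal disagreement drives the weight toward 0.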