f-GRPO and Beyond: Divergence-Based Reinforcement Learning Algorithms for General LLM Alignment

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of effectively aligning large language models with desired objectives in general settings, such as those involving only environmental rewards (e.g., reinforcement learning with verifiable rewards, RLVR) or preference data. Building on the variational representation of the $f$-divergence, the authors propose a unified alignment framework that interprets both reward-driven and preference-based alignment as instances of divergence minimization. Within this framework, they introduce two algorithms: $f$-GRPO for online policy optimization and $f$-HAL for hybrid on-/off-policy optimization. The approach comes with theoretical guarantees and adapts flexibly to diverse alignment scenarios. Experiments show that it consistently outperforms existing methods on both mathematical reasoning tasks (RLVR) and safety alignment benchmarks (preference alignment, PA), validating the framework's effectiveness and generality.
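For context, the variational (Fenchel-dual) representation of an $f$-divergence that this divergence-minimization view rests on takes the standard form

$$ D_f(P \,\|\, Q) \;=\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big], \qquad f^{*}(t) \;=\; \sup_{u}\{\,u\,t - f(u)\,\}, $$

where $T$ is a critic function and $f^{*}$ is the convex conjugate of $f$. This is the textbook form of the bound; the exact instantiation used for $f$-GRPO and $f$-HAL may differ.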

📝 Abstract
Recent research shows that Preference Alignment (PA) objectives act as divergence estimators between aligned (chosen) and unaligned (rejected) response distributions. In this work, we extend this divergence-based perspective to general alignment settings, such as reinforcement learning with verifiable rewards (RLVR), where only environmental rewards are available. Within this unified framework, we propose f-Group Relative Policy Optimization (f-GRPO), a class of on-policy reinforcement learning objectives, and f-Hybrid Alignment Loss (f-HAL), a class of hybrid on-/off-policy objectives, for general LLM alignment based on the variational representation of f-divergences. We provide theoretical guarantees that these classes of objectives improve the average reward after alignment. Empirically, we validate our framework on both RLVR (math reasoning) and PA (safety alignment) tasks, demonstrating superior performance and flexibility compared to current methods.
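As a concrete illustration of the divergence-estimator view of preference alignment described above, the sketch below trains a critic on chosen vs. rejected responses by maximizing the variational $f$-divergence lower bound, shown here for the KL case where $f^{*}(t)=e^{t-1}$. It is a minimal, assumption-laden sketch: the function and variable names are illustrative, and it is not the paper's $f$-GRPO or $f$-HAL objective.

```python
# Minimal sketch (illustrative names, not the paper's implementation) of the
# variational f-divergence lower bound as a training signal:
# maximize E_chosen[T(x)] - E_rejected[f*(T(x))] over a critic T,
# using the KL conjugate f*(t) = exp(t - 1) for f(u) = u log u.
import torch
import torch.nn as nn


def f_star_kl(t: torch.Tensor) -> torch.Tensor:
    """Convex conjugate of f(u) = u log u (KL divergence)."""
    return torch.exp(t - 1.0)


def variational_divergence_bound(critic: nn.Module,
                                 chosen_feats: torch.Tensor,
                                 rejected_feats: torch.Tensor) -> torch.Tensor:
    """Lower bound on D_f(P_chosen || P_rejected); larger means tighter."""
    t_chosen = critic(chosen_feats)
    t_rejected = critic(rejected_feats)
    return t_chosen.mean() - f_star_kl(t_rejected).mean()


if __name__ == "__main__":
    # Toy setup: a small MLP critic over fixed-size response features.
    dim = 16
    critic = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    chosen = torch.randn(128, dim) + 0.5      # stand-in "aligned" samples
    rejected = torch.randn(128, dim) - 0.5    # stand-in "unaligned" samples

    for _ in range(200):
        opt.zero_grad()
        bound = variational_divergence_bound(critic, chosen, rejected)
        (-bound).backward()                   # maximize the lower bound
        opt.step()

    print(f"Estimated divergence lower bound: {bound.item():.3f}")
```

In the paper's framing, a reward-only setting such as RLVR would replace the chosen/rejected split with reward-induced distributions, and the policy itself (not just a critic) would be optimized against such a bound; how that coupling is done is specific to $f$-GRPO and $f$-HAL and is not reproduced here.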
Problem

Research questions and friction points this paper is trying to address.

LLM alignment
Preference Alignment
Reinforcement Learning
f-divergence
Reward Modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

f-divergence
preference alignment
reinforcement learning
LLM alignment
variational representation