Towards a Unified View of Large Language Model Post-Training

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the fragmentation between online (model-generated) and offline (human- or model-demonstrated) training data, as well as the misalignment of optimization objectives between reinforcement learning (RL) and supervised fine-tuning (SFT), in large language model post-training. To unify these paradigms, the authors derive a Unified Policy Gradient Estimator: a decomposable, general-purpose gradient estimator built from four interchangeable components (stabilization mask, reference-policy denominator, advantage estimate, and likelihood gradient) that models heterogeneous data as a dynamically weighted optimization process under a single objective. They show theoretically that both RL and SFT gradients are special cases of this estimator, and on that basis propose Hybrid Post-Training (HPT), an algorithm that dynamically selects among training signals. Empirically, HPT achieves significant improvements over strong baselines on six mathematical reasoning benchmarks and two out-of-distribution tasks, demonstrating consistent robustness across model scales and architectures.

📝 Abstract
Two major sources of training data exist for post-training modern language models: online data (model-generated rollouts) and offline data (human or other-model demonstrations). These two types of data are typically used by approaches such as Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT), respectively. In this paper, we show that these approaches are not in contradiction, but are instances of a single optimization process. We derive a Unified Policy Gradient Estimator and express a wide spectrum of post-training approaches as the gradient of a common objective under different data-distribution assumptions and bias-variance tradeoffs. The gradient estimator is constructed from four interchangeable parts: stabilization mask, reference-policy denominator, advantage estimate, and likelihood gradient. Motivated by our theoretical findings, we propose Hybrid Post-Training (HPT), an algorithm that dynamically selects different training signals. HPT is designed to yield both effective exploitation of demonstrations and stable exploration without sacrificing learned reasoning patterns. We provide extensive experiments and ablation studies to verify the effectiveness of our unified theoretical framework and of HPT. Across six mathematical reasoning benchmarks and two out-of-distribution suites, HPT consistently surpasses strong baselines across models of varying scales and families.
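The abstract's four-part decomposition can be sketched as a single per-token estimator: a stabilization mask times an importance ratio (the reference-policy denominator) times an advantage estimate times the likelihood gradient. The sketch below is a minimal illustration under assumed conventions, not the paper's implementation; the function name, the threshold-based mask, and the array shapes are my own choices.

```python
import numpy as np

def unified_policy_gradient(logp_grad, token_logp, ref_logp, advantage,
                            mask_threshold=None):
    """Illustrative four-part estimator (names and mask rule are assumptions):
    stabilization mask * (pi_theta / pi_ref) * advantage * likelihood gradient.

    logp_grad:  (T, D) per-token gradients of log pi_theta w.r.t. parameters
    token_logp: (T,)   log-probabilities under the current policy
    ref_logp:   (T,)   log-probabilities under the reference policy
    advantage:  (T,)   per-token advantage estimates
    """
    ratio = np.exp(token_logp - ref_logp)  # pi_theta / pi_ref per token
    if mask_threshold is None:
        mask = np.ones_like(ratio)         # no stabilization: keep every token
    else:
        # Drop tokens whose ratio drifts too far from 1 (a simple stand-in
        # for clipping-style stabilization masks).
        mask = (np.abs(ratio - 1.0) <= mask_threshold).astype(float)
    weight = mask * ratio * advantage      # per-token scalar weight
    return (weight[:, None] * logp_grad).sum(axis=0)
```

Setting `ref_logp = token_logp` (ratio 1), `advantage = 1`, and no mask recovers a plain SFT-style weighted log-likelihood gradient, while a nontrivial reference policy, advantage, and mask give an RL-style estimator, which is the sense in which both are special cases of one objective.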
Problem

Research questions and friction points this paper is trying to address.

Unifying online and offline data approaches in post-training
Proposing a unified policy gradient estimator for optimization
Developing Hybrid Post-Training to enhance reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Policy Gradient Estimator for optimization
Hybrid Post-Training algorithm dynamically selects signals
Gradient estimator decomposed into four interchangeable parts
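HPT's dynamic selection of training signals can be illustrated with a toy rule: when the model's own rollouts succeed often enough, rely on the RL signal (stable exploration); otherwise fall back to SFT on demonstrations (exploitation). The success-rate criterion and threshold below are assumptions for illustration, not the paper's stated rule.

```python
def hpt_select_signal(rollout_rewards, threshold=0.5):
    """Toy signal selector (criterion and threshold are assumptions).

    rollout_rewards: per-rollout rewards for one prompt; a reward > 0
    counts as a successful rollout.
    Returns "rl" when the policy's own rollouts are good enough to learn
    from, "sft" when demonstrations should drive the update instead.
    """
    success_rate = sum(r > 0 for r in rollout_rewards) / len(rollout_rewards)
    return "rl" if success_rate >= threshold else "sft"
```

The design point is that the choice is made per prompt from the policy's current behavior, so easy prompts keep training on exploration while hard ones still benefit from demonstrations.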