🤖 AI Summary
This paper addresses the fragmentation between online (model-generated) and offline (human- or model-demonstrated) data in large language model post-training, along with the apparent mismatch between the optimization objectives of reinforcement learning (RL) and supervised fine-tuning (SFT). To unify these paradigms, we derive a Unified Policy Gradient Estimator whose gradient decomposes into four interchangeable components: a stabilization mask, a reference-policy denominator, an advantage estimate, and a likelihood gradient. We show theoretically that both RL and SFT gradients are special cases of this single estimator under different data-distribution assumptions. Building on this view, we propose Hybrid Post-Training (HPT), an algorithm that dynamically selects between RL and SFT training signals. Empirically, HPT achieves significant improvements over strong baselines on six mathematical reasoning benchmarks and two out-of-distribution tasks, demonstrating consistent robustness across model scales and architectures.
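To make the decomposition concrete, the four components can be read as factors of a single gradient term. The notation below is our paraphrase of the summary (stabilization mask $\mathbb{1}_{\text{stable}}$, reference-policy denominator $\pi_{\mathrm{ref}}$, advantage estimate $\hat{A}$, likelihood gradient $\nabla_\theta \pi_\theta$), not the paper's exact equation:

$$
\nabla_\theta J(\theta) \;=\; \mathbb{E}\!\left[\, \mathbb{1}_{\text{stable}} \cdot \frac{\hat{A}}{\pi_{\mathrm{ref}}(y_t \mid x, y_{<t})} \cdot \nabla_\theta\, \pi_\theta(y_t \mid x, y_{<t}) \,\right].
$$

Under this reading, choosing $\pi_{\mathrm{ref}} = \pi_\theta$ with $\hat{A} = 1$ collapses the ratio into the SFT gradient $\nabla_\theta \log \pi_\theta$, while choosing $\pi_{\mathrm{ref}} = \pi_{\text{old}}$ with an estimated advantage recovers PPO-style RL gradients.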
📝 Abstract
Two major sources of training data exist for post-training modern language models: online data (model-generated rollouts) and offline data (human or other-model demonstrations). These two types of data are typically used by approaches like Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT), respectively. In this paper, we show that these approaches are not in contradiction but are instances of a single optimization process. We derive a Unified Policy Gradient Estimator and express a wide spectrum of post-training approaches as gradients of a common objective under different data-distribution assumptions and various bias-variance tradeoffs. The gradient estimator is constructed from four interchangeable parts: stabilization mask, reference-policy denominator, advantage estimate, and likelihood gradient. Motivated by our theoretical findings, we propose Hybrid Post-Training (HPT), an algorithm that dynamically selects between different training signals. HPT is designed to yield both effective exploitation of demonstrations and stable exploration without sacrificing learned reasoning patterns. We provide extensive experiments and ablation studies to verify the effectiveness of our unified theoretical framework and of HPT. Across six mathematical reasoning benchmarks and two out-of-distribution suites, HPT consistently surpasses strong baselines across models of varying scales and families.
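As an illustration of how an algorithm might "dynamically select different training signals," here is a minimal per-prompt sketch in PyTorch. It assumes a Hugging Face-style causal LM and tokenizer, a precomputed list of rollouts with scalar rewards, and a simple success-rate threshold; the helper names (`sequence_logprob`, `hpt_loss`), the threshold rule, and the REINFORCE-with-group-baseline loss are illustrative stand-ins, not the paper's actual masking and advantage choices:

```python
import torch
import torch.nn.functional as F


def sequence_logprob(model, tokenizer, prompt, completion):
    """Sum of token log-probabilities of `completion` given `prompt`."""
    ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    logits = model(ids).logits[:, :-1]            # logits predicting tokens 1..L-1
    logps = F.log_softmax(logits, dim=-1)
    targets = ids[:, 1:]
    token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logps[:, prompt_len - 1:].sum()  # score completion tokens only


def hpt_loss(model, tokenizer, prompt, demonstration, rollouts, rewards,
             threshold=0.5):
    """Hybrid training signal for one prompt (illustrative sketch).

    If the policy already solves the prompt often enough, apply an RL-style
    loss to its own rollouts (exploration); otherwise imitate the offline
    demonstration with an SFT loss (exploitation).
    """
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    if (rewards > 0).float().mean() >= threshold:
        advantages = rewards - rewards.mean()     # group-mean baseline
        logps = torch.stack([sequence_logprob(model, tokenizer, prompt, r)
                             for r in rollouts])
        return -(advantages * logps).mean()       # REINFORCE-style update
    # Low success rate: fall back to supervised imitation of the demonstration.
    return -sequence_logprob(model, tokenizer, prompt, demonstration)
```

The point of the sketch is the switch itself: both branches are gradients of the same likelihood term, differing only in which data they score (online rollouts vs. the offline demonstration) and how the advantage weight is set, which mirrors the unified-estimator view above.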