Large Language Model Post-Training: A Unified View of Off-Policy and On-Policy Learning

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current post-training approaches for large language models are fragmented, lacking a unified perspective on how they overcome behavioral bottlenecks. This work proposes a structured framework that treats post-training as an intervention on model behavior, categorizing methods by the source of behavioral trajectories into off-policy and on-policy learning. It systematically integrates supervised fine-tuning, preference optimization, reinforcement learning, process supervision, verifier-guided training, knowledge distillation, and multi-stage pipelines through three functional roles: support expansion, policy reshaping, and behavioral consolidation. The framework clarifies the essential functions and complementary mechanisms of these methods, providing a theoretical foundation for diagnosing training bottlenecks and designing multi-stage collaborative systems, and it emphasizes that system-level coordination yields greater benefits than optimizing toward any single objective.
📝 Abstract
Post-training has become central to turning pretrained large language models (LLMs) into aligned and deployable systems. Recent progress spans supervised fine-tuning (SFT), preference optimization, reinforcement learning (RL), process supervision, verifier-guided methods, distillation, and multi-stage pipelines. Yet these methods are often discussed in fragmented ways, organized by labels or objective families rather than by the behavioral bottlenecks they address. This survey argues that LLM post-training is best understood as structured intervention on model behavior. We organize the field first by trajectory provenance, which defines two primary learning regimes: off-policy learning on externally supplied trajectories, and on-policy learning on learner-generated rollouts. We then interpret methods through two recurring roles -- effective support expansion, which makes useful behaviors more reachable, and policy reshaping, which improves behavior within already reachable regions -- together with a complementary systems-level role, behavioral consolidation, which preserves, transfers, and amortizes behavior across stages and model transitions. This perspective yields a unified reading of major paradigms. SFT may serve either support expansion or policy reshaping, whereas preference-based methods are usually off-policy reshaping. On-policy RL often improves behavior on learner-generated states, though under stronger guidance it can also make hard-to-reach reasoning paths reachable. Distillation is often best understood as consolidation rather than only compression, and hybrid pipelines emerge as coordinated multi-stage compositions. Overall, the framework helps diagnose post-training bottlenecks and reason about stage composition, suggesting that progress in LLM post-training increasingly depends on coordinated system design rather than any single dominant objective.
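The abstract's core distinction is trajectory provenance: off-policy learning updates on externally supplied trajectories (e.g. SFT on expert demonstrations), while on-policy learning updates on the learner's own rollouts (e.g. policy-gradient RL). The toy sketch below is not from the paper; it is a generic illustration on a softmax policy over three actions, using a cross-entropy step for the off-policy regime and a REINFORCE step for the on-policy regime. All function names and hyperparameters (`sft_step`, `reinforce_step`, `lr=0.5`) are illustrative choices, not anything the survey defines.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sft_step(logits, expert_action, lr=0.5):
    """Off-policy: cross-entropy gradient step toward an externally
    supplied (expert) action. Gradient of -log p(expert) w.r.t. logit a
    is p_a - 1[a == expert], so descent adds lr * (1[...] - p_a)."""
    probs = softmax(logits)
    return [l + lr * ((1.0 if a == expert_action else 0.0) - p)
            for a, (l, p) in enumerate(zip(logits, probs))]

def reinforce_step(logits, reward_fn, rng, lr=0.5):
    """On-policy: sample an action from the current policy, then
    reinforce it in proportion to the observed reward (REINFORCE)."""
    probs = softmax(logits)
    action = rng.choices(range(len(logits)), weights=probs)[0]
    r = reward_fn(action)
    new_logits = [l + lr * r * ((1.0 if a == action else 0.0) - p)
                  for a, (l, p) in enumerate(zip(logits, probs))]
    return new_logits, action, r

rng = random.Random(0)
logits = [0.0, 0.0, 0.0]

# Off-policy phase: demonstrations always pick action 2
# (roughly the "support expansion" role: make the behavior reachable).
for _ in range(20):
    logits = sft_step(logits, expert_action=2)

# On-policy phase: reward 1.0 only for action 2, on the policy's own
# samples (roughly "policy reshaping" within the reachable region).
for _ in range(50):
    logits, _, _ = reinforce_step(
        logits, lambda a: 1.0 if a == 2 else 0.0, rng)

probs = softmax(logits)
print(probs)  # action 2 dominates after both phases
```

The two update rules differ only in where the trained-on action comes from (a fixed dataset vs the policy's own sample), which is the provenance axis the survey organizes the field around.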
Problem

Research questions and friction points this paper is trying to address.

post-training
large language models
off-policy learning
on-policy learning
behavioral bottlenecks
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training
off-policy learning
on-policy learning
behavioral intervention
policy reshaping