🤖 AI Summary
To address the growing tension between the scarcity of high-quality text data and the escalating computational demands of large language model (LLM) pre-training, this paper proposes RLPT (Reinforcement Learning on Pre-Training data), a training-time scaling paradigm that removes the dependence on human annotation for reward construction. RLPT applies reinforcement learning directly to pre-training data, using a self-supervised next-segment reasoning objective as the reward: the policy is rewarded for accurately predicting the subsequent text segment given the preceding context, which lets it autonomously explore meaningful reasoning trajectories within raw corpora. By eschewing external annotation, RLPT makes RL efficient and scalable on pre-training data. Experiments on Qwen3-4B-Base show that RLPT substantially improves reasoning and generalization, with absolute gains of up to 8.1 points (on GPQA-Diamond) and consistent improvements on MMLU, MMLU-Pro, KOR-Bench, AIME24, and AIME25, validating its effectiveness across diverse evaluation tasks.
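Concretely, the objective can be formalized as maximizing an expected self-supervised reward (a hedged sketch in our own notation; the symbols $\mathcal{D}$, $\pi_\theta$, and $r$ are not taken from the paper):

$$
\max_{\theta} \;\; \mathbb{E}_{(c,\, s)\,\sim\, \mathcal{D}}\;\; \mathbb{E}_{\hat{s}\,\sim\, \pi_{\theta}(\cdot \mid c)}\big[\, r(\hat{s},\, s) \,\big]
$$

where $\mathcal{D}$ yields (preceding context, next segment) pairs sliced from the pre-training corpus, $\pi_{\theta}$ is the policy LLM, and $r(\hat{s}, s)$ scores the sampled continuation $\hat{s}$ against the ground-truth segment $s$. No human-labeled reward model appears anywhere in the objective.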
📝 Abstract
The growing disparity between the exponential scaling of computational resources and the finite growth of high-quality text data now constrains conventional scaling approaches for large language models (LLMs). To address this challenge, we introduce Reinforcement Learning on Pre-Training data (RLPT), a new training-time scaling paradigm for optimizing LLMs. In contrast to prior approaches that scale training primarily through supervised learning, RLPT enables the policy to autonomously explore meaningful trajectories to learn from pre-training data and improve its capability through reinforcement learning (RL). While existing RL strategies such as reinforcement learning from human feedback (RLHF) and reinforcement learning with verifiable rewards (RLVR) rely on human annotation for reward construction, RLPT eliminates this dependency by deriving reward signals directly from pre-training data. Specifically, it adopts a next-segment reasoning objective, rewarding the policy for accurately predicting subsequent text segments conditioned on the preceding context. This formulation allows RL to be scaled on pre-training data, encouraging the exploration of richer trajectories across broader contexts and thereby fostering more generalizable reasoning skills. Extensive experiments on both general-domain and mathematical reasoning benchmarks across multiple models validate the effectiveness of RLPT. For example, when applied to Qwen3-4B-Base, RLPT yields absolute improvements of $3.0$, $5.1$, $8.1$, $6.0$, $6.6$, and $5.3$ on MMLU, MMLU-Pro, GPQA-Diamond, KOR-Bench, AIME24, and AIME25, respectively. The results further demonstrate favorable scaling behavior, suggesting strong potential for continued gains with more compute. In addition, RLPT provides a solid foundation, extending the reasoning boundaries of LLMs and enhancing RLVR performance.
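To make the training loop concrete, below is a minimal Python sketch of the next-segment reasoning objective. It is an illustrative assumption, not the paper's implementation: the abstract does not specify the segmentation scheme, reward function, or RL algorithm, so this sketch assumes paragraph-level segments, a toy token-overlap F1 reward, and a single-sample REINFORCE update. The helpers `segment_document`, `f1_reward`, and `rlpt_step` are hypothetical names introduced here; only the model name `Qwen/Qwen3-4B-Base` comes from the paper's experiments.

```python
# Hedged sketch of RLPT's next-segment reasoning objective.
# Assumptions (not from the paper): paragraph segmentation, token-F1 reward,
# REINFORCE with a single sampled continuation per segment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen3-4B-Base"  # base model used in the paper's experiments

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
policy = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-6)


def segment_document(text: str) -> list[str]:
    """Split raw pre-training text into segments (here: paragraphs)."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]


def f1_reward(prediction: str, reference: str) -> float:
    """Toy self-supervised reward: token-level F1 between the sampled
    continuation and the ground-truth next segment. No human labels."""
    pred, ref = prediction.split(), reference.split()
    if not pred or not ref:
        return 0.0
    overlap = len(set(pred) & set(ref))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def rlpt_step(document: str, max_new_tokens: int = 128) -> None:
    """One pass over a document: for each segment boundary, sample a
    continuation, score it against the true next segment, and apply a
    REINFORCE-style policy-gradient update."""
    segments = segment_document(document)
    for i in range(1, len(segments)):
        context = "\n\n".join(segments[:i])   # preceding context
        target = segments[i]                  # ground-truth next segment

        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        # Sample a trajectory (continuation) from the current policy.
        sampled = policy.generate(
            ctx_ids, max_new_tokens=max_new_tokens, do_sample=True
        )
        gen_ids = sampled[:, ctx_ids.shape[1]:]
        prediction = tokenizer.decode(gen_ids[0], skip_special_tokens=True)

        # Reward derived entirely from the pre-training data itself.
        reward = f1_reward(prediction, target)

        # REINFORCE: scale the log-likelihood of the sampled tokens by reward.
        logits = policy(sampled).logits[:, :-1]
        logps = torch.log_softmax(logits, dim=-1)
        token_logps = logps.gather(-1, sampled[:, 1:].unsqueeze(-1)).squeeze(-1)
        gen_logps = token_logps[:, ctx_ids.shape[1] - 1:]
        loss = -(reward * gen_logps.sum())

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The key design point this sketch illustrates is that the reward is computed from the corpus itself, so the loop can be scaled across arbitrary pre-training data without the human annotation that RLHF and RLVR require; a practical implementation would presumably use a stronger reward (e.g., model-judged semantic match) and a batched RL algorithm rather than this single-sample update.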