Reinforcement Learning on Pre-Training Data

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the growing tension between the scarcity of high-quality annotated text and the escalating computational demands of large language model (LLM) training, this paper proposes RLPT (Reinforcement Learning on Pre-Training data), a training-time scaling paradigm that eliminates reliance on human-provided reward signals. RLPT applies reinforcement learning directly to raw pre-training corpora, using a self-supervised next-segment reasoning objective as the reward: the policy is rewarded for accurately predicting subsequent text segments conditioned on the preceding context. By eschewing external annotation, RLPT enables efficient, scalable, annotation-free optimization and encourages the model to explore richer reasoning trajectories. Experiments on Qwen3-4B-Base demonstrate that RLPT substantially enhances reasoning and generalization, yielding absolute gains of up to 8.1 points (on GPQA-Diamond) alongside consistent improvements on MMLU, MMLU-Pro, KOR-Bench, AIME24, and AIME25, validating its effectiveness across diverse evaluation tasks.
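The reward here is self-supervised: the model is scored on how well its generated continuation matches the actual next segment of the corpus. Below is a minimal sketch in Python, assuming a token-overlap F1 reward; the paper's actual scoring rule (how a predicted segment is judged against the ground truth) may differ.

```python
# Minimal sketch of a self-supervised next-segment reward: the policy's
# predicted continuation is compared against the corpus's true next segment.
# Token-level F1 is an illustrative stand-in for the paper's reward function.

def segment_reward(predicted: str, reference: str) -> float:
    """Token-level F1 between the predicted and ground-truth next segment."""
    pred_tokens = predicted.split()
    ref_tokens = reference.split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Count overlapping tokens with multiplicity.
    ref_counts: dict[str, int] = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For instance, `segment_reward("the model predicts", "the model predicts tokens")` gives precision 1.0, recall 0.75, and a reward of about 0.86; no annotator is needed because the reference segment comes from the corpus itself.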

📝 Abstract
The growing disparity between the exponential scaling of computational resources and the finite growth of high-quality text data now constrains conventional scaling approaches for large language models (LLMs). To address this challenge, we introduce Reinforcement Learning on Pre-Training data (RLPT), a new training-time scaling paradigm for optimizing LLMs. In contrast to prior approaches that scale training primarily through supervised learning, RLPT enables the policy to autonomously explore meaningful trajectories to learn from pre-training data and improve its capability through reinforcement learning (RL). While existing RL strategies such as reinforcement learning from human feedback (RLHF) and reinforcement learning with verifiable rewards (RLVR) rely on human annotation for reward construction, RLPT eliminates this dependency by deriving reward signals directly from pre-training data. Specifically, it adopts a next-segment reasoning objective, rewarding the policy for accurately predicting subsequent text segments conditioned on the preceding context. This formulation allows RL to be scaled on pre-training data, encouraging the exploration of richer trajectories across broader contexts and thereby fostering more generalizable reasoning skills. Extensive experiments on both general-domain and mathematical reasoning benchmarks across multiple models validate the effectiveness of RLPT. For example, when applied to Qwen3-4B-Base, RLPT yields absolute improvements of 3.0, 5.1, 8.1, 6.0, 6.6, and 5.3 on MMLU, MMLU-Pro, GPQA-Diamond, KOR-Bench, AIME24, and AIME25, respectively. The results further demonstrate favorable scaling behavior, suggesting strong potential for continued gains with more compute. In addition, RLPT provides a solid foundation, extending the reasoning boundaries of LLMs and enhancing RLVR performance.
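Because rewards come straight from the corpus, RL prompts can be manufactured from any document by splitting it into segments and pairing each prefix with its true continuation. A minimal sketch of this data construction, assuming paragraph-level segmentation (the abstract speaks generically of "segments", so the granularity here is an assumption):

```python
# Hedged sketch: turning raw pre-training text into (context, next-segment)
# RL prompts. Segments are taken to be blank-line-separated paragraphs,
# which is an illustrative choice, not the paper's specified scheme.

def make_next_segment_examples(document: str, min_context_segments: int = 2):
    """Split a document into segments and yield (context, target) pairs."""
    segments = [s.strip() for s in document.split("\n\n") if s.strip()]
    for i in range(min_context_segments, len(segments)):
        context = "\n\n".join(segments[:i])  # everything before segment i
        target = segments[i]                 # the segment to be predicted
        yield context, target
```

For a five-paragraph document this yields three prompts, each conditioning on all earlier paragraphs, which is how a single corpus can be recycled into many RL episodes without any labeling.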
Problem

Research questions and friction points this paper is trying to address.

Addressing limited high-quality text data for scaling large language models
Eliminating dependency on human annotation for reinforcement learning rewards
Enhancing reasoning skills through autonomous exploration of pre-training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning on pre-training data
Derives reward signals from next-segment prediction
Enables autonomous exploration of text trajectories (a toy end-to-end sketch follows below)
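To make these three points concrete, here is a toy training step tying the data construction and reward scoring above into a policy-gradient update. The `Policy` interface and the group-mean baseline are illustrative assumptions rather than the paper's specified algorithm, and `segment_reward` refers to the earlier sketch:

```python
# Illustrative RL loop: sample several candidate continuations per context,
# score each with the self-supervised next-segment reward, and reinforce
# completions that beat the group mean. `Policy` is a hypothetical
# placeholder, not the paper's API.
from typing import Iterable, Protocol, Tuple


class Policy(Protocol):
    def sample(self, context: str) -> str: ...
    def reinforce(self, context: str, completion: str, advantage: float) -> None: ...


def rlpt_step(policy: Policy,
              examples: Iterable[Tuple[str, str]],
              num_samples: int = 4) -> None:
    """One training step: group sampling, reward scoring, baselined update."""
    for context, target in examples:
        completions = [policy.sample(context) for _ in range(num_samples)]
        rewards = [segment_reward(c, target) for c in completions]
        baseline = sum(rewards) / len(rewards)  # group-mean baseline (an assumption)
        for completion, reward in zip(completions, rewards):
            policy.reinforce(context, completion, reward - baseline)
```

The group-relative baseline is a common choice in reward-only RL fine-tuning because it needs no learned value function; whether RLPT uses it is not stated in this summary.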
Siheng Li
LLM Department, Tencent

Kejiao Li
LLM Department, Tencent

Zenan Xu
Sun Yat-sen University

Guanhua Huang
LLM Department, Tencent

Evander Yang
LLM Department, Tencent

Kun Li
LLM Department, Tencent

Haoyuan Wu
The Chinese University of Hong Kong
Generative AI, Large Language Models, Multimodal Models, Agentic AI, Representation Learning

Jiajia Wu
University of California
Neural Interface IC Design, Bioinstrumentation, Image Sensor

Zihao Zheng
LLM Department, Tencent

Chenchen Zhang
LLM Department, Tencent

Kun Shi
LLM Department, Tencent

Kyrierl Deng
LLM Department, Tencent

Qi Yi
LLM Department, Tencent

Ruibin Xiong
Tencent, Hunyuan
Large Language Model, Machine Learning

Tingqiang Xu
Tsinghua University
Theoretical Computer Science, Machine Learning

Yuhao Jiang
Postdoc Researcher, EPFL
Soft Robotics, Mechanism Design, Dynamic Modeling, Controls

Jianfeng Yan
LLM Department, Tencent

Yuyuan Zeng
LLM Department, Tencent

Guanghui Xu
LLM Department, Tencent

Jinbao Xue
HunYuan Infra Team

Zhijiang Xu
HunYuan Infra Team

Zheng Fang
HunYuan Infra Team

Shuai Li
HunYuan Infra Team

Qibin Liu
HunYuan Infra Team

Xiaoxue Li
HunYuan Infra Team

Zhuoyu Li
HunYuan Infra Team

Yangyu Tao
HunYuan Infra Team

Fei Gao
HunYuan Infra Team

Cheng Jiang
Postdoc at Institut national de la recherche scientifique (INRS)
Structured illumination, 3D measurement, 3D imaging, Single-pixel imaging

Bo Chao Wang
HunYuan Infra Team

Kai Liu
HunYuan Infra Team

Jianchen Zhu
HunYuan Infra Team

Wai Lam
The Chinese University of Hong Kong
Text Mining and Machine Learning, Intelligent Information Retrieval

Wayyt Wang
LLM Department, Tencent

Bo Zhou
LLM Department, Tencent

Di Wang
LLM Department, Tencent