Diversity or Precision? A Deep Dive into Next Token Prediction

📅 2025-12-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from limited exploration quality during reinforcement learning (RL) fine-tuning due to reliance on pretraining token distributions, which are suboptimal for downstream task-specific reasoning. Method: We propose a precision-prioritized supervised pretraining optimization framework. Its core innovations include: (i) introducing a single-step policy gradient perspective into supervised fine-tuning; (ii) designing a rank-aware asymmetric negative sampling mechanism; (iii) incorporating an explicit positive scaling factor to encode a precision-first prior; and (iv) establishing an entropy–accuracy trade-off analysis framework via joint entropy regularization and probability concentration control. Contribution/Results: Contrary to conventional high-entropy distribution assumptions, our precision-oriented prior significantly enhances exploration efficacy in RL fine-tuning. End-to-end inference performance—measured on mathematical reasoning and code generation tasks—improves by 12.3% on average.

Technology Category

Application Category

📝 Abstract
Recent advancements have shown that reinforcement learning (RL) can substantially improve the reasoning abilities of large language models (LLMs). The effectiveness of such RL training, however, depends critically on the exploration space defined by the pre-trained model's token-output distribution. In this paper, we revisit the standard cross-entropy loss, interpreting it as a specific instance of policy gradient optimization applied within a single-step episode. To systematically study how the pre-trained distribution shapes the exploration potential for subsequent RL, we propose a generalized pre-training objective that adapts on-policy RL principles to supervised learning. By framing next-token prediction as a stochastic decision process, we introduce a reward-shaping strategy that explicitly balances diversity and precision. Our method employs a positive reward scaling factor to control probability concentration on ground-truth tokens and a rank-aware mechanism that treats high-ranking and low-ranking negative tokens asymmetrically. This allows us to reshape the pre-trained token-output distribution and investigate how to provide a more favorable exploration space for RL, ultimately enhancing end-to-end reasoning performance. Contrary to the intuition that higher distribution entropy facilitates effective exploration, we find that imposing a precision-oriented prior yields a superior exploration space for RL.
Problem

Research questions and friction points this paper is trying to address.

Balancing diversity and precision in token prediction for RL
Adapting RL principles to shape pre-training token distributions
Investigating optimal exploration spaces for enhanced reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapting on-policy RL principles to supervised pre-training
Introducing reward shaping to balance diversity and precision
Using precision-oriented prior to enhance RL exploration space
🔎 Similar Papers
No similar papers found.
Haoyuan Wu
Haoyuan Wu
The Chinese University of Hong Kong
Generative AILarge Language ModelsMultimodal ModelsAgentic AIRepresentation Learning
H
Hai Wang
LLM Department, Tencent
Jiajia Wu
Jiajia Wu
University of California
Neural Interface IC DesignBioinstrumentationImage Sensor
J
Jinxiang Ou
LLM Department, Tencent
Keyao Wang
Keyao Wang
Baidu Inc.
deep learningface-anti spoofingcomputer vision
W
Weile Chen
LLM Department, Tencent
Z
Zihao Zheng
LLM Department, Tencent
B
Bei Yu
The Chinese University of Hong Kong