LaSeR: Reinforcement Learning with Last-Token Self-Rewarding

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RLVR paradigms lack validation signals during inference, and prior self-verification approaches require two-stage prompting—first reasoning, then verification—resulting in inefficiency. This paper proposes LaSeR, an end-to-end self-rewarding reinforcement learning framework that, for the first time, reformulates the self-verification objective in closed form as a **self-reward score for the final generated token**, enabling simultaneous reasoning and verification with only one additional token generation. Methodologically, LaSeR computes the self-reward from the next-token prediction probability distribution and aligns it with verifier-derived rewards via mean squared error (MSE) loss. Experiments demonstrate that LaSeR significantly improves large language models’ reasoning performance and self-rewarding capability, enhances robustness to inference-time scaling (e.g., beam search width or sampling temperature), and drastically reduces verification overhead compared to multi-stage baselines.
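The abstract below spells out the score's closed form: the difference between the policy's next-token log-probability for a pre-specified token at the solution's last position and a pre-calculated constant, scaled by the KL coefficient. A minimal sketch of that computation, with illustrative values for the constant and coefficient (the function name and all numbers here are assumptions, not from the paper's code):

```python
import math

def last_token_self_reward(next_token_logprobs, star_token_id, beta=0.05, c=-1.0):
    """Map the policy's next-token log-prob at the solution's last token
    to a scalar self-reward score: beta * (log p(t*) - c).
    beta (KL coefficient) and c (pre-calculated constant) are illustrative."""
    return beta * (next_token_logprobs[star_token_id] - c)

# Toy next-token distribution over a 4-token vocabulary.
probs = [0.7, 0.1, 0.1, 0.1]
logprobs = [math.log(p) for p in probs]
score = last_token_self_reward(logprobs, star_token_id=0)
```

Because the score reads off an already-predicted distribution, it costs only one extra token of inference after the solution is generated.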

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate the training of the model's self-verification capability into the standard RLVR process, thereby unifying reasoning and verification capabilities within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically reveal that the closed-form solution to the RL objective of self-verification can be reduced to a remarkably simple form: the true reasoning reward of a solution is equal to its last-token self-rewarding score, which is computed as the difference between the policy model's next-token log-probability assigned to any pre-specified token at the solution's last token and a pre-calculated constant, scaled by the KL coefficient. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with an MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution of the last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model's reasoning performance but also equips it with remarkable self-rewarding capability, thereby boosting its inference-time scaling performance.
Problem

Research questions and friction points this paper is trying to address.

Enhances reasoning capabilities of Large Language Models using reinforcement learning
Reduces computational inefficiency in self-verification during training and testing
Aligns last-token self-rewarding scores with verifier-based reasoning rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses last-token self-rewarding scores for reinforcement learning
Augments RLVR loss with MSE loss for joint optimization
Derives scores from next-token probability at solution end
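The loss augmentation the bullets describe can be sketched as the standard RLVR objective plus a mean-squared-error term that pulls the last-token self-rewarding scores toward the verifier's rewards. A minimal sketch, assuming a simple sum with an illustrative weighting coefficient `alpha` (the name and default are assumptions, not from the paper's release):

```python
def laser_loss(rlvr_loss, self_reward_scores, verifier_rewards, alpha=1.0):
    """Augment the base RLVR loss with an MSE term that aligns
    per-solution self-reward scores with verifier-based rewards."""
    mse = sum((s - r) ** 2 for s, r in zip(self_reward_scores, verifier_rewards))
    mse /= len(verifier_rewards)
    return rlvr_loss + alpha * mse

# Two sampled solutions: one self-reward already matches the verifier
# reward (no penalty), the other is off by 1.0.
total = laser_loss(0.5, [1.0, 0.0], [1.0, 1.0])
```

Minimizing the MSE term is what trains the self-rewarding capability jointly with reasoning, rather than in a separate verification pass.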