Incentivizing LLMs to Self-Verify Their Answers

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the distribution shift and scalability limitations that arise when large language models' (LLMs) post-training for mathematical reasoning relies on external reward models, this paper proposes an end-to-end self-verification reinforcement learning framework that unifies generation and verification. The method models generation and verification as a joint action space, enabling the LLM to autonomously assess answer correctness and eliminating the distribution mismatch between the generator and external reward models. RL training is conducted on Qwen2.5-Math-7B and DeepSeek-R1-Distill-Qwen-1.5B, covering variable-length reasoning contexts. Experiments demonstrate substantial post-training improvements across multiple mathematical reasoning benchmarks. Moreover, the framework enables effective test-time adaptive scaling: during inference, iterative self-verification progressively refines outputs, yielding consistent accuracy gains. The approach thus advances scalable, self-contained reasoning without external reward supervision.
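The joint generation-verification action space can be sketched as a single training reward that scores both the answer and the model's own correctness verdict. The additive shaping below is an illustrative assumption, not the paper's exact formulation, and `unified_reward` is a hypothetical name:

```python
def unified_reward(answer: str, verdict: bool, gold: str) -> float:
    """Score a rollout that both generates an answer and self-verifies it.

    answer  -- the model's final answer string
    verdict -- the model's own judgment of whether its answer is correct
    gold    -- the reference answer, available only during RL training
    """
    correct = answer.strip() == gold.strip()
    gen_reward = 1.0 if correct else 0.0                # reward for solving the problem
    ver_reward = 1.0 if verdict == correct else 0.0     # reward for judging itself accurately
    # Both actions live in one reward signal, so a single RL process can
    # train generation and verification jointly.
    return gen_reward + ver_reward
```

Note that the verification term pays off for correctly flagging a wrong answer, which is what lets the verifier stay calibrated to the generator's own output distribution.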

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable progress in complex reasoning tasks through both post-training and test-time scaling laws. While prevalent test-time scaling approaches are often realized by using external reward models to guide the model generation process, we find only marginal gains can be acquired when scaling a model post-trained on specific reasoning tasks. We identify that the limited improvement stems from distribution discrepancies between the specific post-trained generator and the general reward model. To address this, we propose a framework that incentivizes LLMs to self-verify their own answers. By unifying answer generation and verification within a single reinforcement learning (RL) process, we train models that can effectively assess the correctness of their own solutions. The trained model can further scale its performance during inference time by verifying its generations, without the need for external verifiers. We train our self-verification models based on Qwen2.5-Math-7B and DeepSeek-R1-Distill-Qwen-1.5B, demonstrating its capabilities across varying reasoning context lengths. Experiments on multiple mathematical reasoning benchmarks show that our models can not only improve post-training performance but also enable effective test-time scaling. Our code is available at https://github.com/mansicer/self-verification.
Problem

Research questions and friction points this paper is trying to address.

Addressing distribution gaps between task-specific LLMs and general reward models
Enabling LLMs to self-verify answers without external verifiers
Improving reasoning performance via unified generation-verification RL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies answer generation and verification via RL
Trains LLMs to self-verify without external verifiers
Scales performance during inference via self-verification
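The inference-time scaling idea in the bullets above can be sketched as a reject-and-regenerate loop. `generate` and `self_verify` are hypothetical stand-ins for calls to the trained model; their bodies here are toy stubs for illustration only:

```python
def generate(problem: str, attempt: int) -> str:
    """Stand-in for sampling an answer from the trained model."""
    return ["41", "43", "42"][attempt % 3]  # pretend sampled answers vary per attempt

def self_verify(problem: str, answer: str) -> bool:
    """Stand-in for the model's own verification pass over its answer."""
    return answer == "42"

def solve_with_self_verification(problem: str, max_attempts: int = 8) -> str:
    """Regenerate until the model accepts its own answer or the budget runs out.

    No external verifier is involved: the same model produces the answer
    and judges it, so compute can be scaled at test time by raising
    max_attempts.
    """
    answer = generate(problem, 0)
    for attempt in range(1, max_attempts):
        if self_verify(problem, answer):
            return answer
        answer = generate(problem, attempt)  # rejected: sample a fresh answer
    return answer
```

The budget `max_attempts` is the adaptive-scaling knob: harder problems consume more verification rounds, easy ones exit early.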
Fuxiang Zhang
Nanyang Technological University
Language Modeling · Reinforcement Learning
Jiacheng Xu
Nanyang Technological University
Reinforcement Learning · Large Language Model
Chaojie Wang
Skywork AI, Singapore
Ce Cui
Skywork AI, Singapore
Yang Liu
Skywork AI, Singapore
Bo An
Nanyang Technological University, Singapore; Skywork AI, Singapore