TinyV: Reducing False Negatives in Verification Improves RL for LLM Reasoning

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In reinforcement learning (RL), verifier false negatives (incorrect rejections of correct answers) severely distort reward signals, leading to vanishing gradients and inefficient training. This work is the first to systematically characterize the detrimental impact of false negatives on RL convergence. The authors propose TinyV, a lightweight, plug-and-play verification framework that combines a fine-tuned small model with rule-based verifiers to dynamically detect false negatives and recover misclassified outputs, without retraining the primary policy model. TinyV integrates dynamic reward shaping and task-specific adaptation for mathematical reasoning. Evaluated across multiple mathematical reasoning benchmarks, it improves pass rates by up to 10 percentage points, accelerates convergence, and substantially reduces the false negative rate, which affects over 38% of responses in the Big-Math-RL-Verified dataset under existing verifiers.
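The two-stage verification described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are invented here, the rule-based check is a toy normalization, and the LLM judge is abstracted as a callable that a real system would back with a fine-tuned small model.

```python
def rule_based_verify(pred: str, gold: str) -> bool:
    """Cheap rule-based check: normalize whitespace/case and compare.
    Prone to false negatives on equivalent but differently formatted
    answers (e.g. '1/2' vs '0.5')."""
    norm = lambda s: s.strip().lower().replace(" ", "")
    return norm(pred) == norm(gold)

def hybrid_verify(pred: str, gold: str, llm_judge) -> bool:
    """Hybrid verification in the spirit of TinyV: accept immediately if
    the rule-based check passes; otherwise fall back to a small LLM judge
    to catch potential false negatives."""
    if rule_based_verify(pred, gold):
        return True
    # llm_judge is any callable (pred, gold) -> bool, e.g. a fine-tuned
    # small model prompted to decide semantic equivalence of the answers.
    return llm_judge(pred, gold)
```

Because the LLM is only consulted when the rule-based verifier rejects, the extra cost is paid exactly on the cases where false negatives can occur.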

📝 Abstract
Reinforcement Learning (RL) has become a powerful tool for enhancing the reasoning abilities of large language models (LLMs) by optimizing their policies with reward signals. Yet, RL's success relies on the reliability of rewards, which are provided by verifiers. In this paper, we expose and analyze a widespread problem--false negatives--where verifiers wrongly reject correct model outputs. Our in-depth study of the Big-Math-RL-Verified dataset reveals that over 38% of model-generated responses suffer from false negatives, where the verifier fails to recognize correct answers. We show, both empirically and theoretically, that these false negatives severely impair RL training by depriving the model of informative gradient signals and slowing convergence. To mitigate this, we propose TinyV, a lightweight LLM-based verifier that augments existing rule-based methods by dynamically identifying potential false negatives and recovering valid responses to produce more accurate reward estimates. Across multiple math-reasoning benchmarks, integrating TinyV boosts pass rates by up to 10% and accelerates convergence relative to the baseline. Our findings highlight the critical importance of addressing verifier false negatives and offer a practical approach to improve RL-based fine-tuning of LLMs. Our code is available at https://github.com/uw-nsl/TinyV.
Problem

Research questions and friction points this paper is trying to address.

False negatives in verifiers impair RL training for LLMs
Existing verifiers frequently and wrongly reject correct model outputs
Inaccurate rewards slow convergence and reduce reasoning performance
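The gradient-vanishing failure mode behind these friction points can be illustrated with a simplified group-relative advantage computation (a GRPO-style sketch, simplified here by omitting the usual standard-deviation normalization; not code from the paper): if the verifier falsely rejects every correct rollout in a group, all rewards are zero, every advantage is zero, and the policy gradient carries no signal.

```python
def group_advantages(rewards):
    """Simplified group-relative advantages: each rollout's reward minus
    the group mean. When a faulty verifier zeroes out all rewards, all
    advantages are zero and the policy update vanishes."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# All-false-negative group: no learning signal at all.
dead = group_advantages([0.0, 0.0, 0.0, 0.0])

# Recovering even one falsely rejected rollout restores a nonzero signal.
alive = group_advantages([1.0, 0.0, 0.0, 0.0])
```

This is why recovering false negatives does more than correct individual rewards: it restores informative gradients that would otherwise be lost entirely.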
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight LLM-based verifier reduces false negatives
Dynamic identification of potential false negatives
Augments rule-based methods for accurate rewards