🤖 AI Summary
This study investigates the efficacy of negative reinforcement, i.e., training exclusively on incorrect samples, for mathematical reasoning in large language models (LLMs). We decompose the learning signal of reinforcement learning with verifiable rewards (RLVR) into Positive Sample Reinforcement (PSR), which reinforces correct responses, and Negative Sample Reinforcement (NSR), which penalizes incorrect ones. Training Qwen2.5-Math-7B and Qwen3-4B on mathematical reasoning data, we find that NSR alone consistently improves over the base model across the entire Pass@$k$ spectrum (up to $k=256$), often matching or surpassing PPO and GRPO, whereas PSR alone improves Pass@$1$ but reduces diversity and degrades performance at higher $k$. A gradient analysis shows that NSR suppresses incorrect generations and redistributes probability mass toward other plausible candidates, refining the model's prior knowledge rather than introducing new behaviors. Building on this insight, we propose a simple variant of the RL objective that upweights NSR and consistently improves overall Pass@$k$ performance on MATH, AIME 2025, and AMC23. The implementation is publicly available.
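As a rough schematic of the PSR/NSR decomposition described above (all notation here, including the reward convention and the weight $\lambda$, is introduced for illustration and is not taken verbatim from the paper): with a binary verifiable reward $r(x, y) \in \{+1, -1\}$ for prompt $x$ and sampled response $y$, the policy-gradient signal splits into a term from correct samples and a term from incorrect samples, and the proposed variant reweights the two.

```latex
% Schematic decomposition; \lambda and the reward convention are illustrative notation.
\nabla_\theta J(\theta)
  = \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\!\left[ r(x, y)\, \nabla_\theta \log \pi_\theta(y \mid x) \right]
  = \underbrace{\mathbb{E}\!\left[ \mathbf{1}[r = +1]\, \nabla_\theta \log \pi_\theta(y \mid x) \right]}_{\text{PSR: reinforce correct responses}}
  - \underbrace{\mathbb{E}\!\left[ \mathbf{1}[r = -1]\, \nabla_\theta \log \pi_\theta(y \mid x) \right]}_{\text{NSR: penalize incorrect responses}}

% Reweighted variant: \lambda < 1 downweights PSR, i.e. upweights NSR relative to it.
\nabla_\theta J_\lambda(\theta)
  = \lambda\, \underbrace{\mathbb{E}\!\left[ \mathbf{1}[r = +1]\, \nabla_\theta \log \pi_\theta(y \mid x) \right]}_{\text{PSR}}
  - \underbrace{\mathbb{E}\!\left[ \mathbf{1}[r = -1]\, \nabla_\theta \log \pi_\theta(y \mid x) \right]}_{\text{NSR}}
```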
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) is a promising approach for training language models (LMs) on reasoning tasks, where it elicits emergent long chains of thought (CoTs). Unlike supervised learning, it updates the model using both correct and incorrect samples via policy gradients. To better understand its mechanism, we decompose the learning signal into reinforcing correct responses and penalizing incorrect ones, referred to as Positive and Negative Sample Reinforcement (PSR and NSR), respectively. We train Qwen2.5-Math-7B and Qwen3-4B on a mathematical reasoning dataset and uncover a surprising result: training with only negative samples, without reinforcing correct responses, can be highly effective, consistently improving performance over the base model across the entire Pass@$k$ spectrum ($k$ up to $256$) and often matching or surpassing PPO and GRPO. In contrast, reinforcing only correct responses improves Pass@$1$ but degrades performance at higher $k$ due to reduced diversity. These inference-scaling trends highlight that solely penalizing incorrect responses may contribute more to performance than previously recognized. Through gradient analysis, we show that NSR works by suppressing incorrect generations and redistributing probability mass toward other plausible candidates, guided by the model's prior beliefs. It refines the model's existing knowledge rather than introducing entirely new behaviors. Building on this insight, we propose a simple variant of the RL objective that upweights NSR, and show that it consistently improves overall Pass@$k$ performance on MATH, AIME 2025, and AMC23. Our code is available at https://github.com/TianHongZXY/RLVR-Decomposed.
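For intuition, here is a minimal PyTorch-style sketch of such a decomposed objective, assuming sequence-level log-probabilities and rewards in $\{+1, -1\}$. The function name, arguments, and the `lambda_psr` weight are illustrative, not the authors' implementation (which is available in the linked repository).

```python
import torch

def decomposed_pg_loss(logprobs: torch.Tensor,
                       rewards: torch.Tensor,
                       lambda_psr: float = 0.1) -> torch.Tensor:
    """REINFORCE-style loss split into PSR and NSR terms (illustrative sketch).

    logprobs:   (batch,) summed log-probabilities of each sampled response
    rewards:    (batch,) verifiable rewards in {+1.0, -1.0}
    lambda_psr: weight on the positive term; values < 1 emphasize NSR
    """
    correct = (rewards > 0).float()
    incorrect = 1.0 - correct

    # PSR: minimizing this term raises the log-probability of correct responses.
    psr = -(correct * logprobs).sum() / correct.sum().clamp(min=1.0)
    # NSR: minimizing this term lowers the log-probability of incorrect responses.
    nsr = (incorrect * logprobs).sum() / incorrect.sum().clamp(min=1.0)

    # lambda_psr < 1 downweights PSR, i.e. upweights NSR relative to it.
    return lambda_psr * psr + nsr

# Example: 4 sampled responses, 2 verified correct and 2 incorrect.
logprobs = torch.tensor([-12.3, -15.1, -9.8, -20.4], requires_grad=True)
rewards = torch.tensor([1.0, -1.0, 1.0, -1.0])
loss = decomposed_pg_loss(logprobs, rewards, lambda_psr=0.1)
loss.backward()
```

In this sketch, setting `lambda_psr` below 1 shifts the emphasis toward penalizing incorrect responses, mirroring the paper's observation that NSR alone can already match or surpass standard RLVR baselines across the Pass@$k$ spectrum.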