🤖 AI Summary
This work addresses a central limitation of reinforcement learning with verifiable rewards (RLVR) in general domains: its dependence on domain-specific external verifiers, which constrains scalability. We propose RLPR, a verifier-free framework that removes the need for external validation modules. Its core idea is to use the LLM's own token-level probabilities for reference answers as the reward signal for reasoning. To mitigate the high variance of this noisy probability reward, RLPR introduces a prob-to-reward transformation together with stabilizing methods, including reward normalization. This enables RLVR-style training to transfer to broad general domains without domain-specific verifiers. Empirically, RLPR outperforms strong baselines across four general-purpose task domains and three mathematical benchmarks, surpassing the concurrent verifier-free method VeriFree by 7.6 points on TheoremQA and the verifier-model-dependent General-Reasoner by 1.6 average points across seven benchmarks, and it consistently enhances the generalization and reasoning capabilities of Gemma, Llama, and Qwen model families.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) demonstrates promising potential in advancing the reasoning capabilities of LLMs. However, its success remains largely confined to mathematical and code domains. This limitation stems primarily from the heavy reliance on domain-specific verifiers, which results in prohibitive complexity and limited scalability. To address this challenge, our key observation is that an LLM's intrinsic probability of generating a correct free-form answer directly indicates its own evaluation of the reasoning reward (i.e., how well the reasoning process leads to the correct answer). Building on this insight, we propose RLPR, a simple verifier-free framework that extrapolates RLVR to broader general domains. RLPR uses the LLM's own token probability scores for reference answers as the reward signal and maximizes the expected reward during training. We find that addressing the high variance of this noisy probability reward is crucial to making it work, and propose prob-to-reward and stabilizing methods to ensure a precise and stable reward from LLM intrinsic probabilities. Comprehensive experiments on four general-domain benchmarks and three mathematical benchmarks show that RLPR consistently improves reasoning capabilities in both areas for Gemma-, Llama-, and Qwen-based models. Notably, RLPR outperforms the concurrent VeriFree by 7.6 points on TheoremQA and 7.5 points on Minerva, and even surpasses the strong verifier-model-dependent approach General-Reasoner by 1.6 average points across seven benchmarks.
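The mechanism the abstract describes can be sketched concretely: score each sampled reasoning trace by the probability the policy assigns to the reference answer's tokens, then suppress variance by standardizing these noisy rewards across rollouts for the same prompt. The sketch below is a minimal illustration under stated assumptions; the mean-token-probability aggregation and per-prompt standardization are hypothetical choices standing in for the paper's actual prob-to-reward and stabilizing methods, and the toy probability values are invented for demonstration.

```python
import math

def prob_reward(answer_token_probs):
    """Aggregate the policy's per-token probabilities for the reference
    answer into a scalar reward. Mean probability is an illustrative
    choice, not necessarily the paper's exact transform."""
    return sum(answer_token_probs) / len(answer_token_probs)

def standardize_rewards(rewards, eps=1e-6):
    """Per-prompt standardization as one plausible variance-suppression
    step: center and scale rewards across rollouts of the same prompt."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (math.sqrt(var) + eps) for r in rewards]

# Toy rollouts: probabilities the model assigns to each reference-answer
# token, conditioned on the question plus three sampled reasoning traces.
rollouts = [
    [0.90, 0.80, 0.95],  # reasoning that strongly supports the answer
    [0.40, 0.30, 0.50],  # weaker reasoning
    [0.10, 0.20, 0.15],  # reasoning that barely supports the answer
]

raw = [prob_reward(p) for p in rollouts]
advantages = standardize_rewards(raw)
```

In this setup, better reasoning traces earn higher raw rewards, and standardization yields zero-mean advantages suitable for a policy-gradient update, illustrating why stabilizing the noisy probability signal matters.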