RLVR Training of LLMs Does Not Improve Thinking Ability for General QA: Evaluation Method and a Simple Solution

πŸ“… 2026-03-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing research indicates that reinforcement learning with verifiable rewards (RLVR) can enhance large language models' reasoning on verifiable tasks, yet whether the intermediate thinking process transfers to general question answering (GQA) remains unclear and is susceptible to reward shortcuts. This work proposes Cross-Generation, an evaluation framework that, for the first time, systematically assesses the transferability of RLVR-trained reasoning to GQA, revealing its limited effectiveness. To address this, we introduce Separated Thinking And Response Training (START), which decouples thinking from answer generation by first optimizing the thinking process alone, using rewards defined on the final answer. Experiments demonstrate that START consistently improves both intermediate reasoning quality and final answer accuracy across multiple GQA benchmarks and reinforcement learning algorithms, outperforming standard RL and RLVR approaches.
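The Cross-Generation idea described above can be sketched roughly as follows. This is an illustrative assumption about how such an evaluation might be scored; the function names, prompt format, and gain metric are not taken from the paper:

```python
# Illustrative sketch of a Cross-Generation-style evaluation (assumed details):
# the thinking text produced by one model is handed to evaluator models of
# varying capability, and the accuracy gain from conditioning on that context
# estimates how useful the intermediate reasoning actually is.

def cross_generation_gain(question, thinking, evaluators, answer_fn, is_correct):
    """Average accuracy gain across evaluators when they answer with the
    generated thinking context versus the bare question alone."""
    gains = []
    for model in evaluators:
        with_ctx = answer_fn(model, f"{question}\n<think>{thinking}</think>")
        bare = answer_fn(model, question)
        gains.append(int(is_correct(with_ctx)) - int(is_correct(bare)))
    return sum(gains) / len(gains)
```

Under this reading, a positive gain suggests the thinking context carries transferable reasoning, while near-zero gains correspond to the limited effectiveness the summary reports for GQA.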

πŸ“ Abstract
Reinforcement learning from verifiable rewards (RLVR) stimulates the thinking processes of large language models (LLMs), substantially enhancing their reasoning abilities on verifiable tasks. It is often assumed that similar gains should transfer to general question answering (GQA), but this assumption has not been thoroughly validated. To assess whether RLVR automatically improves LLM performance on GQA, we propose a Cross-Generation evaluation framework that measures the quality of intermediate reasoning by feeding the generated thinking context into LLMs of varying capabilities. Our evaluation leads to a discouraging finding: the efficacy of the thinking process on GQA tasks is markedly lower than on verifiable tasks, suggesting that explicit training on GQA remains necessary in addition to training on verifiable tasks. We further observe that direct RL training on GQA is less effective than RLVR. Our hypothesis is that, whereas verifiable tasks demand robust logical chains to obtain high rewards, GQA tasks often admit shortcuts to high rewards without cultivating high-quality thinking. To avoid possible shortcuts, we introduce a simple method, Separated Thinking And Response Training (START), which first trains only the thinking process, using rewards defined on the final answer. We show that START improves both the quality of thinking and the final answer across several GQA benchmarks and RL algorithms.
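One way to realize the "train only the thinking, reward the final answer" separation described in the abstract is to restrict the policy-gradient loss to thinking tokens while the scalar reward is still computed from the answer. A minimal sketch under that assumption, with all names illustrative rather than taken from the paper:

```python
# Minimal sketch of a START-style separated objective (assumed details):
# the reward comes from the final answer, but the REINFORCE-style loss is
# masked to thinking tokens, so answer-stage shortcuts receive no gradient
# credit.

def thinking_only_mask(token_tags):
    """Per-token 0/1 mask from tags: 1.0 for 'think' tokens, 0.0 otherwise."""
    return [1.0 if tag == "think" else 0.0 for tag in token_tags]

def masked_pg_loss(logprobs, advantage, mask):
    """Mean negative log-probability over masked tokens, scaled by the
    advantage derived from the answer-level reward."""
    total = sum(-lp * advantage * m for lp, m in zip(logprobs, mask))
    return total / max(sum(mask), 1.0)
```

The paper's actual objective and training schedule may differ; the sketch only shows how decoupling the update target from the reward source removes the shortcut path the abstract hypothesizes.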
Problem

Research questions and friction points this paper is trying to address.

RLVR
General Question Answering
Thinking Ability
Reasoning Quality
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLVR
Cross-Generation Evaluation
START
General Question Answering
Reasoning Quality