🤖 AI Summary
This work addresses a key limitation of existing reinforcement learning approaches for reasoning, which typically optimize only for answer correctness while neglecting the robustness of the reasoning process itself. The authors conceptualize reasoning as a transferable meaning-conveyance process and formally define reasoning robustness in terms of cross-model guidance capability, a perspective introduced in this study. Building on this principle, they propose the RLTR framework, whose reward mechanism incorporates a "transfer reward" that evaluates the reusability and generality of reasoning segments. Experimental results demonstrate that the proposed method improves Maj@64 accuracy by 3.6 percentage points on MATH500 and matches RLVR's performance with approximately 2.5 times fewer training steps, substantially enhancing both sample efficiency and reasoning consistency.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently strengthened LLM reasoning, but its focus on final answer correctness leaves a critical gap: it does not ensure the robustness of the reasoning process itself. We adopt a simple philosophical view: robust reasoning should remain useful beyond the mind that produced it. Accordingly, we treat reasoning as a form of meaning transfer that must survive truncation, reinterpretation, and continuation. Building on this principle, we introduce Reinforcement Learning with Transferable Reward (RLTR), which operationalizes robustness via a transfer reward that tests whether a partial reasoning prefix from one model can guide a separate model to the correct answer. This encourages LLMs to produce reasoning that is stable, interpretable, and genuinely generalizable. Our approach improves sampling consistency while also improving final answer accuracy, and it reaches comparable performance in substantially fewer training steps. For example, on MATH500, RLTR achieves a +3.6%p gain in Maj@64 over RLVR and matches RLVR's average accuracy with roughly 2.5x fewer training steps, providing both more reliable reasoning and significantly greater sample efficiency.
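The transfer reward described above can be sketched in a few lines: truncate one model's reasoning to a prefix, hand it to a separate reader model, and reward the writer only if the reader's continuation reaches the correct answer. The sketch below is a minimal illustration under assumptions, not the paper's implementation: the model calls are stubbed out, and `extract_answer` is a hypothetical toy extractor.

```python
def transfer_reward(prefix, question, correct_answer, reader_generate):
    """Binary transfer reward: 1.0 if a separate reader model, continuing
    a truncated reasoning prefix, arrives at the correct final answer."""
    continuation = reader_generate(question, prefix)
    return 1.0 if extract_answer(continuation) == correct_answer else 0.0

def extract_answer(text):
    # Toy extractor (assumption): take whatever follows the last "Answer:".
    return text.split("Answer:")[-1].strip()

# Stub standing in for the separate guided model; a real reader would be
# a second LLM continuing the reasoning from the given prefix.
def toy_reader(question, prefix):
    return prefix + " ... therefore Answer: 4"

r = transfer_reward("2 + 2 means adding two twos", "What is 2+2?", "4", toy_reader)
# r == 1.0: the prefix was informative enough for the reader to finish correctly.
```

In training, this scalar would replace or augment the usual verifiable-correctness reward, so gradients favor prefixes that remain useful to a model other than the one that wrote them.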