🤖 AI Summary
This work investigates the distinct mechanistic effects of reinforcement learning with verifiable rewards (RLVR) and knowledge distillation on LLM reasoning accuracy and capability. Methodologically, we employ fine-grained difficulty-stratified evaluation coupled with joint analysis of response quality, length, and keyword coverage. Results show that RLVR improves accuracy only on easy questions but significantly degrades performance on the hardest ones, yielding no net capability gain; its benefit stems from generating novel high-quality responses rather than from recalibrating the probabilities of existing ones. In contrast, distillation enhances both accuracy and capability only when it introduces genuinely new knowledge; otherwise, it replicates RLVR's difficulty trade-off. This study is the first to characterize the fundamental boundaries of these two paradigms and proposes a difficulty-aware evaluation framework. It delivers interpretable mechanistic insights into LLM reasoning optimization and provides actionable guidance for practitioners.
📝 Abstract
Recent studies have shown that reinforcement learning with verifiable rewards (RLVR) enhances overall accuracy but fails to improve capability, while distillation can improve both. In this paper, we investigate the mechanisms behind these phenomena. First, we demonstrate that RLVR does not improve capability because it improves accuracy on the less difficult questions at the expense of accuracy on the most difficult ones. Second, we find that RLVR does not merely increase the success probability on the less difficult questions: in our small-model settings, it produces quality responses that were absent from the model's output distribution before training. We further show that these responses are neither noticeably longer nor richer in reflection-related keywords, underscoring the need for more reliable indicators of response quality. Third, we show that while distillation reliably improves accuracy by instilling strong reasoning patterns, it improves capability only when new knowledge is introduced. When distillation transfers only reasoning patterns and no new knowledge, accuracy on the less difficult questions improves at the expense of the most difficult ones, mirroring RLVR. Together, these findings offer a clearer understanding of how RLVR and distillation shape reasoning behavior in language models.
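The difficulty-stratified evaluation described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual code: it assumes each question's difficulty is estimated from the base model's per-question success rate (e.g. pass@1 over many samples), and that the same bins are then reused to compare accuracy before and after training. All function names and the toy data are hypothetical.

```python
from collections import defaultdict

def stratify_by_difficulty(base_success, n_bins=4):
    """Assign each question a difficulty bin from the base model's
    per-question success rate (1.0 = easiest, 0.0 = hardest).
    Bin 0 holds the easiest questions, bin n_bins-1 the hardest."""
    bins = {}
    for qid, rate in base_success.items():
        bins[qid] = min(int((1.0 - rate) * n_bins), n_bins - 1)
    return bins

def accuracy_per_bin(success, bins):
    """Mean success rate within each difficulty bin."""
    grouped = defaultdict(list)
    for qid, rate in success.items():
        grouped[bins[qid]].append(rate)
    return {b: sum(v) / len(v) for b, v in sorted(grouped.items())}

# Hypothetical per-question success rates before and after RLVR.
base = {"q1": 0.9, "q2": 0.7, "q3": 0.4, "q4": 0.05}
rlvr = {"q1": 1.0, "q2": 0.9, "q3": 0.5, "q4": 0.0}

bins = stratify_by_difficulty(base)
print(accuracy_per_bin(base, bins))  # accuracy by difficulty, base model
print(accuracy_per_bin(rlvr, bins))  # accuracy by difficulty, after RLVR
```

In this toy data, the post-RLVR model gains on the easier bins while the hardest bin drops to zero, which is the trade-off pattern the abstract describes; the overall average can still rise even though capability on the hardest questions does not.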