🤖 AI Summary
Existing reinforcement learning with verifiable rewards (RLVR) methods train LLMs only on their own generated responses, so exploration stagnates once the model's initial capability is exhausted, hindering sustained improvement in mathematical reasoning. To address this, the authors propose LTE (Learning to reason from Trial and Error), the first approach that leverages a large language model's own historical incorrect answers as self-generated feedback—eliminating the need for external expert guidance and enabling low-cost, self-driven balancing of exploration and exploitation. LTE integrates this self-reflection mechanism, together with response-length control, into Group Relative Policy Optimization (GRPO). Evaluated across six mathematical reasoning benchmarks, LTE improves Qwen3-4B-Base's Pass@1 by an average of 6.38 points and Pass@k by 9.00 points, significantly mitigating exploration stagnation and enhancing training efficiency.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has recently boosted the reasoning capability of large language models (LLMs) significantly. However, existing RLVR approaches train LLMs only on their own generated responses and are therefore constrained by the models' initial capability, making them prone to exploration stagnation, in which LLMs fail to solve additional training problems and can no longer learn from the training data. Some work tries to address this by leveraging off-policy solutions to training problems, but such methods require external guidance from experts, which is of limited availability. In this work, we propose LTE (Learning to reason from Trial and Error), an approach that hints LLMs with their previously self-generated incorrect answers and with the problem of overlong responses, requiring no external expert guidance. Experiments validate the effectiveness of LTE, which outperforms standard group relative policy optimization (GRPO) by 6.38 points in Pass@1 and 9.00 points in Pass@k on average across six mathematics benchmarks with Qwen3-4B-Base. Further analysis confirms that LTE successfully mitigates exploration stagnation and enhances both exploitation and exploration during training.
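The two ingredients the abstract names—GRPO's group-relative reward normalization and hinting the model with its own past incorrect answers—can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the prompt template, and the use of sample standard deviation are all assumptions.

```python
from statistics import mean, stdev


def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each reward against the mean and
    standard deviation of its sampled response group.

    Sketch only: some implementations use the population std or add an
    epsilon instead of the zero-variance guard below.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    if sigma == 0.0:
        # All responses scored identically: no learning signal.
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]


def build_hinted_prompt(problem, past_incorrect_answers):
    """Prepend the model's own previously incorrect final answers as a hint,
    so the next rollout can avoid repeating them (hypothetical template)."""
    hint = "\n".join(f"- {a}" for a in past_incorrect_answers)
    return (
        f"Problem: {problem}\n"
        f"Your previous final answers below were incorrect; "
        f"do not repeat them:\n{hint}\n"
        f"Solve the problem again, reasoning step by step."
    )
```

With a verifiable reward (e.g. 1 for a correct final answer, 0 otherwise), correct responses in a mixed group receive positive advantages and incorrect ones negative, while a group that is entirely wrong yields zero advantage everywhere—the stagnation the hint is meant to break.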