Do Not Step Into the Same River Twice: Learning to Reason from Trial and Error

📅 2025-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing reinforcement learning with verifiable rewards (RLVR) methods train only on the model's own initial responses, which are bounded by its starting capability and thus prone to exploration stagnation, hindering sustained improvement in mathematical reasoning. To address this, the authors propose LTE (Learning to reason from Trial and Error), which hints a large language model with its own previously generated incorrect answers, eliminating the need for external expert guidance and enabling low-cost, self-driven balancing of exploration and exploitation. LTE integrates this self-reflection mechanism, together with response-length control, into Group Relative Policy Optimization (GRPO). Evaluated across six mathematical reasoning benchmarks, LTE improves Qwen3-4B-Base's Pass@1 by 6.38 and Pass@k by 9.00 on average, significantly mitigating exploration stagnation and improving training efficiency.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has significantly boosted the reasoning capability of large language models (LLMs) recently. However, existing RLVR approaches merely train LLMs based on their own generated responses and are constrained by the initial capability of LLMs, thus prone to exploration stagnation, in which LLMs fail to solve more training problems and cannot further learn from the training data. Some work tries to address this by leveraging off-policy solutions to training problems but requires external guidance from experts, which suffers from limited availability. In this work, we propose LTE (Learning to reason from Trial and Error), an approach hinting LLMs with their previously self-generated incorrect answers and the problem of overlong responses, which does not require any external expert guidance. Experiments validate the effectiveness of LTE, which outperforms normal group relative policy optimization (GRPO) by 6.38 in Pass@1 and 9.00 in Pass@k on average across six mathematics benchmarks for Qwen3-4B-Base. Further analysis confirms that LTE successfully mitigates the problem of exploration stagnation and enhances both exploitation and exploration during training.
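The abstract reports gains in Pass@1 and Pass@k. These metrics are standard rather than paper-specific; for reference, the widely used unbiased Pass@k estimator (the function name below is illustrative, not from the paper) can be computed as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: the probability that at least one of k
    samples drawn without replacement from n generations is correct,
    given that c of the n generations are correct.
    Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For k = 1 this reduces to plain accuracy: with 3 correct out of 10 generations, `pass_at_k(10, 3, 1)` gives 0.3.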
Problem

Research questions and friction points this paper is trying to address.

Addresses exploration stagnation in reinforcement learning for reasoning
Reduces reliance on external expert guidance during training
Prevents repeated incorrect answers and overlong responses from wasting training signal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses self-generated incorrect answers as hints
Addresses overlong response problems autonomously
Avoids external expert guidance requirements
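The bullets above describe the mechanism only at a high level. A minimal sketch of the trial-and-error idea, assuming a simple prompt-augmentation scheme, a binary answer verifier, and a hard length cutoff (all function names, the prompt template, and the threshold are hypothetical stand-ins, not the paper's actual implementation):

```python
def build_hinted_prompt(problem: str, wrong_answers: list) -> str:
    """Prepend previously incorrect final answers as hints so the model
    avoids repeating them (illustrative template, not the paper's)."""
    if not wrong_answers:
        return problem
    hints = "\n".join(f"- {a}" for a in wrong_answers)
    return f"{problem}\n\nPrevious incorrect answers (do not repeat):\n{hints}\n"

def lte_step(problems, policy_sample, verify, history, max_len=4096):
    """One hypothetical rollout-collection step: sample a group of
    responses per problem (as in GRPO), record wrong final answers as
    hints for future epochs, and give zero reward to overlong responses."""
    rollouts = []
    for p in problems:
        prompt = build_hinted_prompt(p, sorted(history.get(p, set())))
        for response, answer in policy_sample(prompt):
            too_long = len(response) > max_len
            correct = (not too_long) and verify(p, answer)
            if not correct:
                # Remember the failed answer so later prompts can hint it.
                history.setdefault(p, set()).add(answer)
            rollouts.append((prompt, response, 1.0 if correct else 0.0))
    return rollouts  # (prompt, response, reward) triples for the GRPO update
```

The key design point this sketch tries to capture is that the hint pool is built entirely from the model's own past failures, so no external expert solutions are needed.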
Chenming Tang (National Key Laboratory for Multimedia Information Processing, Peking University)
Hsiu-Yuan Huang (National Key Laboratory for Multimedia Information Processing, Peking University)
Weijie Liu (Nankai University)
Saiyong Yang (LLM Department, Tencent)
Yunfang Wu (Peking University)