🤖 AI Summary
Current mathematical reasoning research over-relies on correct samples, neglecting the potential value of erroneous data for fostering model self-reflection. Method: We propose a "learning from errors" paradigm, constructing a fine-grained "error–reflection–correction" triplet dataset. Our approach employs error-type-driven data augmentation and a model-aware smoothed reflection linking mechanism, enabling large language models (LLMs) to autonomously detect and rectify errors during generation via supervised fine-tuning, without requiring external critique models. Contribution/Results: This is the first work to achieve purely generative, end-to-end self-correction. On benchmarks including GSM8K and MATH, our method significantly outperforms strong baselines, improving both final-answer accuracy and the robustness of intermediate reasoning steps.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable reasoning capabilities in solving mathematical problems. However, existing approaches primarily focus on improving the quality of correct training data, e.g., distilling high-quality correct solutions from advanced models, while neglecting the value contained in error data; this can hinder the model's reflective ability. Though some studies attempt to leverage error data, they often involve complex mechanisms, such as Monte Carlo Tree Search (MCTS), to explore error nodes. In this work, we propose to enhance LLMs' reasoning ability by Learning from Errors for Mathematical Advancement (LEMMA). LEMMA constructs fine-tuning data in which an incorrect solution containing an erroneous step is connected, via a reflection, to a correct solution. Specifically, we systematically analyze model-generated error types and introduce an error-type grounded mistake augmentation method to collect diverse and representative errors. Correct solutions are obtained either by fixing the errors or by generating a fresh solution from scratch. Through a model-aware smooth reflection connection, each erroneous solution is linked to its correct counterpart. By fine-tuning on the constructed dataset, the model is able to self-correct errors autonomously within the generation process, without relying on external critique models. Experimental results demonstrate that LEMMA achieves significant performance improvements over other strong baselines.
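To make the data format concrete, the following is a minimal sketch of how one "error–reflection–correction" training example might be assembled for supervised fine-tuning. The function name, field names, and the reflection connector phrasing are illustrative assumptions, not the paper's exact implementation; the point is only that the incorrect attempt, the reflection, and the corrected solution are concatenated into a single target sequence, so the model learns to self-correct mid-generation.

```python
# Hypothetical sketch of a LEMMA-style triplet example (format assumed).

def build_triplet_example(question: str,
                          erroneous_solution: str,
                          reflection: str,
                          correct_solution: str) -> dict:
    """Concatenate an incorrect attempt, a reflection connector, and a
    corrected solution into one completion for supervised fine-tuning."""
    completion = (
        f"{erroneous_solution}\n"
        f"{reflection}\n"        # smooth transition from error to correction
        f"{correct_solution}"
    )
    return {"prompt": question, "completion": completion}

example = build_triplet_example(
    question="Tom has 3 bags with 4 apples each. How many apples in total?",
    erroneous_solution="3 + 4 = 7, so Tom has 7 apples.",
    reflection=("Wait, I added the numbers, but each of the 3 bags holds "
                "4 apples, so I should multiply instead."),
    correct_solution="3 * 4 = 12, so Tom has 12 apples.",
)
```

Fine-tuning on such sequences teaches the model to emit the reflection and correction itself, rather than relying on an external critic at inference time.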