🤖 AI Summary
Small language models (e.g., Llama-3-8B, DeepSeekMath-Base) struggle to autonomously detect and correct errors in complex mathematical reasoning.

- **Method:** a two-stage supervised correction framework: (1) hierarchical thought-template distillation from a large teacher model, guiding the student model to generate fine-grained, stepwise reasoning; and (2) cross-model collaborative direct preference optimization (DPO), which explicitly incorporates the teacher's error-driven correction trajectories into preference training.
- **Contribution/Results:** the first joint supervision paradigm combining hierarchical template distillation with error-driven reflection, and the first DPO mechanism for mathematical reasoning built on cross-model collaboration. SuperCorrect-7B achieves new state-of-the-art performance among 7B-scale models, outperforming DeepSeekMath-7B by 7.8% on MATH and 5.3% on GSM8K, and surpassing Qwen2.5-Math-7B by 15.1% on MATH and 6.3% on GSM8K.
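For context, stage (2) builds on the standard DPO objective (Rafailov et al., 2023). A plausible instantiation, in which $x$ is the problem together with the student's erroneous reasoning, $y_w$ is the teacher's correction trace, and $y_l$ is the student's uncorrected solution (this pairing is inferred from the description above, not quoted from the paper), would be:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\,\pi_{\mathrm{ref}}) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Here $\pi_\theta$ is the student policy being trained, $\pi_{\mathrm{ref}}$ is a frozen reference copy, $\beta$ controls the strength of the preference margin, and $\sigma$ is the logistic function.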
📝 Abstract
Large language models (LLMs) like GPT-4, PaLM, and LLaMA have shown significant improvements in various reasoning tasks. However, smaller models such as Llama-3-8B and DeepSeekMath-Base still struggle with complex mathematical reasoning because they fail to effectively identify and correct reasoning errors. Recent reflection-based methods aim to address these issues by enabling self-reflection and self-correction, but they still face challenges in independently detecting errors in their reasoning steps. To overcome these limitations, we propose SuperCorrect, a novel two-stage framework that uses a large teacher model to supervise and correct both the reasoning and reflection processes of a smaller student model. In the first stage, we extract hierarchical high-level and detailed thought templates from the teacher model to guide the student model in eliciting more fine-grained reasoning steps. In the second stage, we introduce cross-model collaborative direct preference optimization (DPO) to enhance the self-correction abilities of the student model by following the teacher's correction traces during training. This cross-model DPO approach teaches the student model to effectively locate and resolve erroneous thoughts with error-driven insights from the teacher model, breaking through the bottleneck of its own thinking and acquiring new skills and knowledge to tackle challenging problems. Extensive experiments consistently demonstrate the superiority of our method over previous approaches. Notably, our SuperCorrect-7B model significantly surpasses the powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3% on the MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models. Code: https://github.com/YangLing0818/SuperCorrect-llm
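Below is a minimal PyTorch sketch of what the cross-model collaborative DPO loss could look like. It implements the standard DPO objective over preference pairs assembled as the abstract describes: the preferred response is the teacher's error-driven correction trace, and the dispreferred response is the student's original erroneous solution. The function name, tensor inputs, and pairing scheme are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def cross_model_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(teacher correction trace | problem)
    policy_rejected_logps: torch.Tensor,  # log p_theta(student's erroneous trace | problem)
    ref_chosen_logps: torch.Tensor,       # same traces scored by the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO loss where the preference pairs come from cross-model
    collaboration: 'chosen' = the teacher's correction trace, 'rejected' =
    the student's erroneous solution (a pairing assumed from the abstract).
    Each input is a batch of sequence-level log-probabilities, i.e. the
    sum of token log-probs over the response tokens only.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the reward margin between corrected and erroneous traces.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random sequence-level log-probabilities.
if __name__ == "__main__":
    b = 4  # batch size
    loss = cross_model_dpo_loss(
        policy_chosen_logps=torch.randn(b),
        policy_rejected_logps=torch.randn(b),
        ref_chosen_logps=torch.randn(b),
        ref_rejected_logps=torch.randn(b),
    )
    print(f"cross-model DPO loss: {loss.item():.4f}")
```

In a full training loop, the reference model would stay frozen while only the student policy receives gradients, so the loss pushes the student toward the teacher's corrections without drifting far from its initialization.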