Teaching Large Language Models to Reason through Learning and Forgetting

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur high computational overhead and latency during inference when relying on external search procedures (e.g., Monte Carlo Tree Search or beam search), while standard supervised fine-tuning (SFT) often rapidly degrades their inherent search capabilities. Method: We propose a “Learn–Forget” collaborative fine-tuning paradigm that integrates multi-source reasoning trajectories—including both successful and failed paths from MCTS and beam search—into fine-grained supervised training. Crucially, we employ a small learning rate to mitigate search capability degradation, thereby internalizing search behavior directly into model parameters without runtime search. Contribution/Results: Our method achieves state-of-the-art performance on Game-of-24 and Countdown mathematical reasoning benchmarks, substantially outperforming both standard SFT and inference-time search baselines. It accelerates inference by 180× while maintaining or improving solution accuracy. This work is the first to systematically identify, analyze, and resolve the search capability degradation problem in LLM fine-tuning, establishing a new paradigm for efficient, lightweight reasoning.
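The summary does not give the exact training objective, but the learn–forget idea can be illustrated with a minimal sketch: ordinary likelihood training on successful search trajectories and a token-level unlikelihood penalty on failed ones. All function and variable names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def learn_forget_loss(logits, tokens, is_success):
    """Toy per-trajectory objective: maximize the likelihood of tokens on
    successful search paths ("learning"), and apply an unlikelihood term
    on failed paths ("forgetting")."""
    probs = softmax(logits)                      # shape (T, vocab)
    p = probs[np.arange(len(tokens)), tokens]    # prob of each taken token
    if is_success:
        return float(-np.log(p + 1e-9).mean())       # learn: push p(token) up
    return float(-np.log(1.0 - p + 1e-9).mean())     # forget: push p(token) down
```

Under this sketch, successful trajectories contribute standard cross-entropy, while failed trajectories contribute a penalty that shrinks as the model stops reproducing them.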

📝 Abstract
Leveraging inference-time search in large language models has proven effective in further enhancing a trained model's capability to solve complex mathematical and reasoning problems. However, this approach significantly increases computational costs and inference time, as the model must generate and evaluate multiple candidate solutions to identify a viable reasoning path. To address this, we propose an effective approach that integrates search capabilities directly into the model by fine-tuning it on both successful reasoning paths (learning) and failed ones (forgetting) derived from diverse search methods. While fine-tuning the model with these data might seem straightforward, we identify a critical issue: the model's search capability tends to degrade rapidly if fine-tuning is performed naively. We show that this degradation can be substantially mitigated by employing a smaller learning rate. Extensive experiments on the challenging Game-of-24 and Countdown mathematical reasoning benchmarks show that our approach not only outperforms both standard fine-tuning and inference-time search baselines but also reduces inference time by 180×.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning without high computational costs
Preventing search capability degradation during fine-tuning
Reducing inference time while improving problem-solving accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning with learning and forgetting paths
Smaller learning rate mitigates degradation
Reduces inference time by 180×
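The small-learning-rate finding can be sketched numerically: the same gradient applied with a large versus a small step size, where only the small step keeps parameters close to their pre-trained values. The values below are illustrative, not the paper's actual hyperparameters.

```python
import numpy as np

def sgd_step(params, grads, lr):
    # Vanilla SGD update; a small lr bounds per-step parameter drift,
    # which is what the paper credits with preserving search capability.
    return params - lr * grads

pretrained = np.array([1.0, -2.0, 0.5])
grads = np.array([10.0, 10.0, 10.0])   # illustrative fine-tuning gradient

drift_large = np.abs(sgd_step(pretrained, grads, lr=1e-2) - pretrained).max()
drift_small = np.abs(sgd_step(pretrained, grads, lr=1e-6) - pretrained).max()
```

With these numbers, the large step moves every parameter by 0.1 while the small step moves it by 1e-5, a 10,000× difference in drift for the same gradient.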