MC-NEST: Enhancing Mathematical Reasoning in Large Language Models leveraging a Monte Carlo Self-Refine Tree

📅 2024-11-23
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) struggle with abstract conceptual understanding, logical reasoning, and multi-step mathematical deduction. Method: We propose Monte Carlo Self-Refine Tree (MC-NEST), a framework that integrates Monte Carlo Tree Search (MCTS) with LLM-based self-evaluation and iterative self-refinement. Leveraging Upper Confidence Bound (UCT)-guided selection with diverse sampling policies, MC-NEST balances exploration and exploitation during reasoning and generalizes across models without manual prompt engineering, enabling end-to-end multi-step symbolic reasoning. Contribution/Results: On the AIME and MathOdyssey benchmarks, MC-NEST with GPT-4o achieves pass@1 scores of 38.6% and 12.6%, respectively, and raises solution quality to 84.0% (GPT-4o) and 82.08% (Phi-3-mini), substantially outperforming prior state-of-the-art methods.

📝 Abstract
Mathematical reasoning presents significant challenges for large language models (LLMs). To enhance their capabilities, we propose Monte Carlo Self-Refine Tree (MC-NEST), an extension of Monte Carlo Tree Search that integrates LLM-based self-refinement and self-evaluation for improved decision-making in complex reasoning tasks. MC-NEST balances exploration and exploitation using Upper Confidence Bound (UCT) scores combined with diverse selection policies. Through iterative critique and refinement, LLMs learn to reason more strategically. Empirical results demonstrate that MC-NEST with an importance sampling policy substantially improves GPT-4o's performance, achieving state-of-the-art pass@1 scores on Olympiad-level benchmarks. Specifically, MC-NEST attains a pass@1 of 38.6 on AIME and 12.6 on MathOdyssey. The solution quality for MC-NEST using GPT-4o and Phi-3-mini reaches 84.0% and 82.08%, respectively, indicating robust consistency across different LLMs. MC-NEST performs strongly across Algebra, Geometry, and Number Theory, benefiting from its ability to handle abstraction, logical deduction, and multi-step reasoning -- core skills in mathematical problem solving.
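The UCT score the abstract refers to is the standard MCTS selection criterion; a common form (written here in the usual MCTS notation, which may differ from the paper's exact symbols) is:

```latex
\mathrm{UCT}(i) = \underbrace{\frac{w_i}{n_i}}_{\text{exploitation}}
  + \underbrace{c \sqrt{\frac{\ln N}{n_i}}}_{\text{exploration}}
```

where $w_i$ is the accumulated reward of child $i$, $n_i$ its visit count, $N$ the parent's visit count, and $c$ an exploration constant trading off the two terms.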
Problem

Research questions and friction points this paper is trying to address.

Enhancing mathematical reasoning in large language models
Improving decision-making in complex reasoning tasks
Achieving state-of-the-art performance on Olympiad-level benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monte Carlo Tree Search extended with LLM-based self-refinement and self-evaluation
Balances exploration and exploitation using UCT scores with diverse selection policies
Improves reasoning through iterative critique and refinement of candidate solutions
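To make the UCT-guided selection concrete, here is a minimal Python sketch of how an MCTS-style framework like MC-NEST might pick which candidate reasoning node to expand next. The names (`Node`, `uct_score`, `select_child`) and the reward scheme are illustrative assumptions, not the paper's implementation; in MC-NEST the reward would come from LLM self-evaluation of each candidate solution.

```python
import math

class Node:
    """A node in the search tree holding one candidate solution draft.

    Hypothetical structure for illustration only.
    """
    def __init__(self, answer):
        self.answer = answer        # candidate reasoning step / solution text
        self.visits = 0             # n_i: times this node has been visited
        self.total_reward = 0.0     # sum of (e.g. LLM self-evaluation) rewards
        self.children = []

def uct_score(node, parent_visits, c=1.41):
    """UCT = exploitation (mean reward) + exploration (visit-count bonus)."""
    if node.visits == 0:
        return float("inf")         # unvisited children are tried first
    exploit = node.total_reward / node.visits
    explore = c * math.sqrt(math.log(parent_visits) / node.visits)
    return exploit + explore

def select_child(parent):
    """Select the child with the highest UCT score."""
    return max(parent.children, key=lambda n: uct_score(n, parent.visits))

# Example: an unvisited child outranks a well-rewarded visited one,
# so the search keeps exploring before committing.
root = Node("problem")
root.visits = 10
a = Node("draft A"); a.visits = 5; a.total_reward = 4.0
b = Node("draft B")                 # unvisited -> infinite UCT score
root.children = [a, b]
print(select_child(root).answer)    # prints "draft B"
```

Once every child has been visited, the exploration bonus shrinks with visit count and selection gradually shifts toward the children with the best average self-evaluation reward.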