Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems

📅 2024-04-23
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face a performance bottleneck in solving mathematical word problems due to semantic misinterpretation—a root cause that remains under-identified and unaddressed in prior work. Method: We propose Deep Understanding of Problems (DUP), which explicitly targets semantic misunderstanding as the primary error source. DUP is a zero-shot, lightweight prompting framework that first reveals the core question of a problem, then extracts the problem-solving information relevant to that question, and finally generates the answer with chain-of-thought reasoning—requiring neither fine-tuning nor auxiliary modules. Contribution/Results: DUP achieves a new state-of-the-art 97.1% zero-shot accuracy on GSM8K and outperforms existing methods across ten mathematical reasoning benchmarks, improving both reasoning robustness and cross-task generalization without architectural or training overhead.

📝 Abstract
Chain-of-Thought (CoT) prompting has enhanced the performance of Large Language Models (LLMs) across various reasoning tasks. However, CoT still falls short in dealing with complex math word problems, as it usually suffers from three pitfalls: semantic misunderstanding errors, calculation errors, and step-missing errors. Prior studies address calculation errors and step-missing errors but neglect semantic misunderstanding errors, which are the major factor limiting the reasoning performance of LLMs. To this end, we propose a simple yet effective method, namely Deeply Understanding the Problems (DUP), to improve the LLMs' math problem-solving ability by addressing semantic misunderstanding errors. The core of our method is to encourage the LLMs to deeply understand the problems and extract the key problem-solving information used for better reasoning. Extensive experiments on 10 diverse reasoning benchmarks show that our DUP method consistently outperforms its counterparts by a large margin. More encouragingly, DUP achieves a new SOTA result on the GSM8K benchmark, with an accuracy of 97.1% under the zero-shot setting.
Problem

Research questions and friction points this paper is trying to address.

Address semantic misunderstanding errors in math word problems
Improve LLMs' problem-solving via deep understanding (DUP)
Achieve state-of-the-art accuracy (97.1%) on GSM8K benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deeply Understanding Problems (DUP) method
Addresses semantic misunderstanding errors
Extracts key problem-solving information
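The staged prompting described above can be sketched as a small pipeline. This is an illustrative sketch only: the prompt wording is paraphrased, not the paper's exact templates, and `ask` is a hypothetical stand-in for any LLM call.

```python
# Sketch of DUP-style staged prompting. Prompt wording is illustrative,
# not the paper's templates; `ask` is an assumed callable that queries an LLM.

def core_question_prompt(problem: str) -> str:
    """Stage 1: ask the model to reveal the core question of the problem."""
    return f"{problem}\nPlease extract the core question, only the most essential part."

def key_info_prompt(problem: str, core_question: str) -> str:
    """Stage 2: extract the problem-solving information relevant to the core question."""
    return (f"{problem}\nNote: the core question is \"{core_question}\".\n"
            "List the information most relevant to solving it.")

def solve_prompt(problem: str, core_question: str, key_info: str) -> str:
    """Stage 3: solve step by step using the distilled understanding."""
    return (f"{problem}\nCore question: {core_question}\n"
            f"Relevant information: {key_info}\n"
            "Solve the problem step by step and state the final answer.")

def dup_solve(problem: str, ask) -> str:
    """Run the three DUP stages; `ask` maps a prompt string to a model reply."""
    core = ask(core_question_prompt(problem))
    info = ask(key_info_prompt(problem, core))
    return ask(solve_prompt(problem, core, info))
```

Because each stage conditions on the previous stage's output, the final reasoning prompt carries the rephrased question and extracted key information, which is what addresses the semantic misunderstanding errors the paper targets.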