ReliableMath: Benchmark of Reliable Mathematical Reasoning on Large Language Models

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate spurious answers to mathematically unsolvable problems, yet existing reliability research focuses predominantly on knowledge-based unanswerable questions, lacking systematic evaluation of mathematical unsolvability. Method: We introduce ReliableMath—the first benchmark dedicated to reliability assessment in mathematical reasoning—featuring a human-verified pipeline for constructing unsolvable problems and integrating them with high-quality open-source solvable instances. We further propose a reliable prompting mechanism and a domain-specific alignment strategy tailored for unsolvability detection. Contribution/Results: Experiments demonstrate that prompt optimization significantly improves large-model accuracy in identifying unsolvable problems, while small models achieve substantial gains in reliability after alignment—both in-domain and cross-domain. ReliableMath establishes a new methodological paradigm and provides practical tools for advancing reliability-aware mathematical reasoning research.

📝 Abstract
Although demonstrating remarkable performance on reasoning tasks, Large Language Models (LLMs) still tend to fabricate unreliable responses when confronted with problems that are unsolvable or beyond their capability, severely undermining their reliability. Prior studies of LLM reliability have primarily focused on knowledge tasks and identifying unanswerable questions, while mathematical reasoning tasks have remained unexplored due to the dearth of unsolvable math problems. To systematically investigate LLM reliability in mathematical reasoning, we formulate a reliability evaluation covering both solvable and unsolvable problems. We then develop the ReliableMath dataset, which combines open-source solvable problems with high-quality unsolvable problems synthesized by our proposed construction workflow and verified through human evaluation. Experiments on a range of LLMs uncover several key findings. LLMs fail to identify unsolvable problems directly and almost always generate fabricated responses. When instructed to indicate unsolvability via a reliable prompt, larger LLMs retain their reliability on solvable problems and improve notably on unsolvable ones, though they still fall short of their performance on solvable problems. Small LLMs, however, rarely show any progress even with reliable prompts. We therefore propose an alignment strategy to enhance small LLMs' reliability, which significantly improves their reliability on both in-domain and out-of-domain tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM reliability on solvable and unsolvable math problems
Addressing fabrication in LLM responses to unsolvable math questions
Improving small LLMs' reliability via alignment strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops ReliableMath dataset for solvable and unsolvable problems
Proposes reliable prompt to indicate unsolvability in LLMs
Introduces alignment strategy to enhance small LLMs' reliability
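The reliable-prompt idea above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the instruction wording, the `[Unsolvable]` refusal marker, and the helper names are hypothetical assumptions introduced here for clarity.

```python
# Hypothetical sketch of a "reliable prompt" plus a simple reliability check.
# The instruction wording and the [Unsolvable] marker are illustrative
# assumptions, not the exact formulation used in the paper.
from typing import Optional

RELIABLE_INSTRUCTION = (
    "If the problem is unsolvable, e.g. it is missing a necessary condition "
    "or contains contradictory conditions, do not fabricate an answer; "
    "instead reply with the single token [Unsolvable]."
)

def build_reliable_prompt(problem: str) -> str:
    """Wrap a math problem with the unsolvability instruction."""
    return f"{RELIABLE_INSTRUCTION}\n\nProblem: {problem}\nAnswer:"

def is_reliable(response: str, solvable: bool,
                gold_answer: Optional[str] = None) -> bool:
    """A response counts as reliable if it answers a solvable problem
    correctly, or explicitly flags an unsolvable one instead of fabricating."""
    refused = "[Unsolvable]" in response
    if solvable:
        return (not refused) and gold_answer is not None and gold_answer in response
    return refused
```

Under this toy criterion, a model that answers `"[Unsolvable]"` to an unsolvable problem scores as reliable, while a fabricated numeric answer to the same problem does not.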
Boyang Xue
Ph.D. Candidate, The Chinese University of Hong Kong
Natural Language Processing, Large Language Models, Speech Recognition
Qi Zhu
Huawei Noah's Ark Lab
Rui Wang
The Chinese University of Hong Kong
Sheng Wang
The University of Hong Kong
Hongru Wang
The Chinese University of Hong Kong
Fei Mi
Huawei Noah's Ark Lab
LLM Post Training
Yasheng Wang
Tencent
Natural Language Processing
Lifeng Shang
Huawei Noah's Ark Lab
Machine Learning, Computer Vision, Pattern Recognition, Natural Language Processing
Qun Liu
Huawei Noah's Ark Lab
Kam-Fai Wong
The Chinese University of Hong Kong