MSC-180: A Benchmark for Automated Formal Theorem Proving from Mathematical Subject Classification

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based theorem provers suffer from narrow mathematical domain coverage and poor generalization across disciplines. Method: We introduce MSC-180, the first formalized mathematical benchmark grounded in the MSC2020 classification system, comprising 180 expert-validated undergraduate- to graduate-level problems spanning 60 subfields. We propose a cross-disciplinary balanced evaluation framework and a novel Coefficient of Variation (CV) metric to quantify performance disparities across mathematical domains. Contribution/Results: Experiments reveal that state-of-the-art models achieve only 18.89% overall pass@32 accuracy, with maximal domain coverage of 41.7% and significantly degraded performance on graduate-level problems. CV values reach 4–6× typical thresholds, indicating heavy reliance on superficial pattern matching rather than systematic mathematical reasoning. MSC-180 establishes a standardized, scalable benchmark for rigorously evaluating and advancing LLMs’ mathematical generalization capabilities.

📝 Abstract
Automated Theorem Proving (ATP) represents a core research direction in artificial intelligence for achieving formal reasoning and verification, playing a significant role in advancing machine intelligence. However, current large language model (LLM)-based theorem provers suffer from limitations such as restricted domain coverage and weak generalization in mathematical reasoning. To address these issues, we propose MSC-180, a benchmark for evaluation based on the MSC2020 mathematical subject classification. It comprises 180 formal verification problems, 3 advanced problems from each of 60 mathematical branches, spanning from undergraduate to graduate levels. Each problem has undergone multiple rounds of verification and refinement by domain experts to ensure formal accuracy. Evaluations of state-of-the-art LLM-based theorem provers under the pass@32 setting reveal that the best model achieves only an 18.89% overall pass rate, with prominent issues including significant domain bias (maximum domain coverage 41.7%) and a difficulty gap (significantly lower pass rates on graduate-level problems). To further quantify performance variability across mathematical domains, we introduce the coefficient of variation (CV) as an evaluation metric. The observed CV values are 4-6 times higher than the statistical high-variability threshold, indicating that the models still rely on pattern matching from training corpora rather than possessing transferable reasoning mechanisms and systematic generalization capabilities. MSC-180, together with its multi-dimensional evaluation framework, provides a discriminative and systematic benchmark for driving the development of next-generation AI systems with genuine mathematical reasoning abilities.
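The pass@32 numbers above follow the standard unbiased pass@k estimator commonly used for sampled proof attempts. The paper does not publish its evaluation code, so the following is an illustrative sketch of that estimator, not the authors' implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021, "Evaluating Large
    Language Models Trained on Code"): the probability that at least one
    of k samples is correct, given c correct samples out of n total."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = k = 32, pass@32 reduces to: did any of the 32 proofs check?
print(pass_at_k(32, 0, 32))  # 0.0 -- no sampled proof verified
print(pass_at_k(32, 1, 32))  # 1.0 -- at least one proof verified
```

With `n = k`, the estimator degenerates to a simple "any proof verified" indicator per problem; the 18.89% figure is then the fraction of the 180 problems for which at least one of the 32 sampled proofs passes the formal checker.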
Problem

Research questions and friction points this paper is trying to address.

Current LLM-based theorem provers cover only a narrow slice of mathematics and generalize poorly across disciplines
Existing formal benchmarks do not evaluate proving ability evenly across mathematical subfields or difficulty levels
There is no standard metric for quantifying how unevenly a prover performs across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

MSC-180: 180 expert-validated formal verification problems, 3 from each of 60 MSC2020 branches, spanning undergraduate to graduate levels
Cross-disciplinary balanced evaluation framework for comparing provers across subfields
Coefficient of variation (CV) as a metric for quantifying cross-domain performance disparity
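The CV metric above can be sketched over per-domain pass rates as follows. The paper does not specify whether it uses the population or sample standard deviation; this sketch assumes the population form, and the pass rates in the usage example are invented for illustration:

```python
import statistics

def coefficient_of_variation(pass_rates: list[float]) -> float:
    """CV = standard deviation / mean of per-domain pass rates.
    Assumes population std dev (pstdev); a higher CV means performance
    is more unevenly distributed across mathematical domains."""
    mean = statistics.mean(pass_rates)
    if mean == 0.0:
        # All domains failed; dispersion relative to the mean is undefined.
        return float("inf")
    return statistics.pstdev(pass_rates) / mean

# Hypothetical per-domain pass rates: strong in a few familiar domains,
# near-zero elsewhere -- the skewed profile the paper attributes to
# pattern matching rather than transferable reasoning.
uneven = [0.6, 0.5, 0.0, 0.0, 0.0, 0.05]
even = [0.19, 0.20, 0.18, 0.19, 0.20, 0.19]
print(coefficient_of_variation(uneven) > coefficient_of_variation(even))  # True
```

A perfectly uniform prover would have CV = 0; the paper reports observed CVs 4-6 times above the conventional high-variability threshold, i.e. success is concentrated in a handful of well-represented domains.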
Authors
Sirui Li
Northeastern University
Wangyue Lu
Northeastern University
Xiaorui Shi
Northeastern University
Ke Weng
Northeastern University
Haozhe Sun
Northeastern University
Minghe Yu
Northeastern University
Tiancheng Zhang
Northeastern University, China
Research interests: deep learning, machine learning, intelligent education
Ge Yu
Northeastern University
Hengyu Liu
Department of Computer Science, Aalborg University
Lun Du
Independent Researcher