Math Natural Language Inference: this should be easy!

📅 2025-07-30
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work investigates large language models' (LLMs) natural language inference (NLI) capability on mathematical text, a task the authors term *Math NLI*. The authors formally define the Math NLI task, construct a dual-version benchmark grounded in authentic mathematical corpora (one version annotated manually by experts, the other with LLM-generated hypotheses and labels), and introduce group-consistency analysis coupled with majority voting to assess model reasoning quality. Key findings: (1) in some settings, a majority vote across a diverse group of LLMs is approximately equivalent to human expert annotation, supporting collective model judgment as a practical proxy for human labels in Math NLI; (2) state-of-the-art LLMs nevertheless show substantial deficiencies in elementary mathematical inference, although they are less prone to hypothesis-only "inference" than the previous generation of models. The study contributes a benchmark, a methodology, and an analytical framework for mathematically grounded NLI evaluation, and the corpora are released to support future work.
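The majority-voting step described above is easy to picture in code. The following is a minimal sketch, not the authors' implementation: the three-way label set, the tie-breaking behavior, and the votes are all assumptions made for illustration.

```python
from collections import Counter

# Assumed label inventory: the standard three-way NLI labels.
LABELS = {"entailment", "contradiction", "neutral"}

def majority_vote(model_labels: list[str]) -> str:
    """Collapse one label per LLM into a single group label.

    Ties fall to whichever label Counter happens to order first;
    a real pipeline might instead flag ties for human review.
    """
    assert all(lbl in LABELS for lbl in model_labels)
    return Counter(model_labels).most_common(1)[0][0]

# Invented votes from a panel of five LLMs on one premise/hypothesis pair.
votes = ["entailment", "entailment", "neutral", "entailment", "contradiction"]
print(majority_vote(votes))  # -> entailment
```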

📝 Abstract
We ask whether contemporary LLMs are able to perform natural language inference (NLI) tasks on mathematical texts. We call this the Math NLI problem. We construct a corpus of Math NLI pairs whose premises are from extant mathematical text and whose hypotheses and gold labels were provided by people with experience in both research-level mathematics and the NLI field. We also investigate the quality of corpora using the same premises but whose hypotheses are provided by LLMs themselves. We investigate not only the performance but also the inter-group consistency of a diverse group of LLMs. We have both positive and negative findings. Among our positive findings: in some settings, using a majority vote of LLMs is approximately equivalent to using human-labeled data in the Math NLI area. On the negative side: LLMs still struggle with mathematical language. They occasionally fail at even basic inferences. Current models are not as prone to hypothesis-only "inference" on our data as the previous generation was. In addition to our findings, we also provide our corpora as data to support future work on Math NLI.
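To make the corpus format concrete, here is a minimal sketch of what a premise/hypothesis/label item could look like. The schema, the field names, and the example pair are invented for illustration; the released corpus may be structured differently.

```python
from dataclasses import dataclass

@dataclass
class MathNLIPair:
    """One Math NLI item: a premise drawn from extant mathematical text,
    a hypothesis (expert-written or LLM-generated), and a gold label."""
    premise: str
    hypothesis: str
    label: str  # assumed label set: "entailment", "contradiction", "neutral"

# Invented example; the hypothesis follows from a classical group-theory fact
# (every element being its own inverse forces commutativity), so the gold
# label would be "entailment".
example = MathNLIPair(
    premise="Let G be a group in which every element g satisfies g * g = e.",
    hypothesis="G is abelian.",
    label="entailment",
)
```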
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' ability to perform NLI on mathematical text (Math NLI)
Comparing the quality of corpora with human-written versus LLM-generated hypotheses
Investigating LLMs' performance and inter-group consistency on mathematical language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructed a Math NLI corpus with expert-written hypotheses and gold labels
Evaluated the quality of companion corpora whose hypotheses are LLM-generated
Assessed LLM performance and inter-group consistency (see the sketch below)
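One standard way to quantify inter-group consistency across a fixed panel of models is Fleiss' kappa, computed over the labels the panel assigns to each item. The paper may use a different consistency measure, so treat the sketch below as an assumption; the votes are invented.

```python
from collections import Counter

LABELS = ["entailment", "contradiction", "neutral"]  # assumed label set

def fleiss_kappa(votes_per_item: list[list[str]]) -> float:
    """Fleiss' kappa for a panel of LLM "raters".

    votes_per_item[i] holds every model's label for item i; each item
    must receive the same number of votes.
    """
    n = len(votes_per_item[0])              # raters (LLMs) per item
    N = len(votes_per_item)                 # number of items
    counts = [Counter(v) for v in votes_per_item]
    # Mean per-item observed agreement.
    P_bar = sum(
        (sum(c[lbl] ** 2 for lbl in LABELS) - n) / (n * (n - 1))
        for c in counts
    ) / N
    # Chance agreement from marginal label frequencies.
    p = [sum(c[lbl] for c in counts) / (N * n) for lbl in LABELS]
    P_e = sum(x ** 2 for x in p)
    # Undefined when every vote on every item is the same label (P_e == 1).
    return (P_bar - P_e) / (1 - P_e)

# Invented votes from four LLMs on three items.
votes = [
    ["entailment", "entailment", "entailment", "entailment"],
    ["neutral", "neutral", "entailment", "neutral"],
    ["contradiction", "neutral", "contradiction", "contradiction"],
]
print(round(fleiss_kappa(votes), 2))  # -> roughly 0.49
```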