Solving Inequality Proofs with Large Language Models

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit insufficient reasoning rigor in Olympiad-level inequality proving, a gap rooted in data scarcity and overly formal benchmarks. Method: We propose a verifiable yet informal two-stage task formulation (bound estimation followed by relation prediction) and introduce IneqMath, the first high-quality inequality dataset featuring expert-annotated step-wise solutions and theorem citations. We further design a multi-granularity LLM-as-judge evaluation framework that assesses both final-answer correctness and four categories of reasoning flaws at the step level. Contributions/Results: This work establishes the first verifiable informal inequality proving task, releases the first multi-level annotated inequality dataset, and develops a fine-grained, step-level evaluation protocol. A comprehensive evaluation of 29 mainstream LLMs reveals that even top-performing models achieve less than 10% step-level accuracy, up to 65.5 percentage points lower than their final-answer accuracy, demonstrating that scaling model size alone fails to improve proof rigor.

📝 Abstract
Inequality proving, crucial across diverse scientific and mathematical fields, tests advanced reasoning skills such as discovering tight bounds and strategic theorem application. This makes it a distinct, demanding frontier for large language models (LLMs), offering insights beyond general mathematical problem-solving. Progress in this area is hampered by existing datasets that are often scarce, synthetic, or rigidly formal. We address this by proposing an informal yet verifiable task formulation, recasting inequality proving into two automatically checkable subtasks: bound estimation and relation prediction. Building on this, we release IneqMath, an expert-curated dataset of Olympiad-level inequalities, including a test set and training corpus enriched with step-wise solutions and theorem annotations. We also develop a novel LLM-as-judge evaluation framework, combining a final-answer judge with four step-wise judges designed to detect common reasoning flaws. A systematic evaluation of 29 leading LLMs on IneqMath reveals a surprising reality: even top models like o1 achieve less than 10% overall accuracy under step-wise scrutiny; this is a drop of up to 65.5% from their accuracy considering only final answer equivalence. This discrepancy exposes fragile deductive chains and a critical gap for current LLMs between merely finding an answer and constructing a rigorous proof. Scaling model size and increasing test-time computation yield limited gains in overall proof correctness. Instead, our findings highlight promising research directions such as theorem-guided reasoning and self-refinement. Code and data are available at https://ineqmath.github.io/.
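The abstract's scoring rule, a final-answer judge combined with four step-wise flaw judges, implies that a solution counts as correct overall only when every judge passes. A minimal sketch of that aggregation, assuming boolean judge verdicts (the function name and stubbed verdicts are illustrative, not the paper's implementation):

```python
def aggregate_verdict(final_answer_ok: bool, step_judge_verdicts: list[bool]) -> bool:
    """Overall correctness requires a correct final answer AND no step-level flaw
    flagged by any of the step-wise judges."""
    return final_answer_ok and all(step_judge_verdicts)

# Example: a model reaches the right answer but one of the four flaw judges
# flags a step-level error, so the rigorous score rejects the whole solution.
step_judges = [True, True, False, True]
print(aggregate_verdict(True, step_judges))  # False
```

This conjunctive rule is what produces the reported gap: final-answer accuracy ignores the step verdicts, while overall accuracy requires all of them to pass.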
Problem

Research questions and friction points this paper is trying to address.

Addressing scarcity of datasets for inequality proving in LLMs
Proposing verifiable subtasks for automated inequality proof checking
Evaluating LLMs' proof rigor versus final-answer accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Informal verifiable inequality proving formulation
IneqMath expert-curated Olympiad dataset
LLM-as-judge step-wise evaluation framework
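The reason the two subtasks are "automatically checkable" is that each reduces to comparing a short final output against ground truth, unlike a free-form proof. A hedged sketch, assuming relation prediction outputs a comparison symbol and bound estimation outputs a constant (the function and field names are illustrative assumptions):

```python
from fractions import Fraction

def check_relation(predicted: str, ground_truth: str) -> bool:
    """Relation prediction: the model outputs one symbol, e.g. from {<, <=, =, >=, >}."""
    return predicted.strip() == ground_truth.strip()

def check_bound(predicted: str, ground_truth: str) -> bool:
    """Bound estimation: the model outputs a constant; compare exactly as rationals
    so that equivalent forms like "1/2" and "0.5" match."""
    return Fraction(predicted) == Fraction(ground_truth)

print(check_relation(">=", ">="))  # True
print(check_bound("1/2", "0.5"))   # True
```

Exact rational comparison avoids the false negatives a string match would produce for equivalent constants, which is one plausible way such answers can be verified without a human or formal prover in the loop.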