Automatically Generating Hard Math Problems from Hypothesis-Driven Error Analysis

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing mathematical benchmarks, which struggle to scale automatically and fail to precisely identify the specific mathematical concepts and skills where large language models (LLMs) exhibit weaknesses. The authors propose a hypothesis-driven framework for automated benchmark generation that first leverages LLMs to formulate hypotheses about model failure modes, then employs error attribution analysis to pinpoint underlying deficiencies, and finally synthesizes challenging, cross-category mathematical problems tailored to those weaknesses. This approach uniquely integrates hypothesis-driven error analysis into benchmark construction, substantially enhancing both the diagnostic precision and generalization capability of the generated problems. Experimental results demonstrate that the synthesized benchmark reduces the accuracy of Llama-3.3-70B-Instruct from 77% on the MATH benchmark to 45%, effectively exposing critical model shortcomings.
📝 Abstract
Numerous math benchmarks exist to evaluate LLMs' mathematical capabilities. However, most involve extensive manual effort and are difficult to scale. Consequently, they cannot keep pace with LLM development or easily provide new instances to mitigate overfitting. Some researchers have proposed automatic benchmark generation methods, but few focus on identifying the specific math concepts and skills on which LLMs are error-prone, and most can only generate category-specific benchmarks. To address these limitations, we propose a new math benchmark generation pipeline that uses AI-generated hypotheses to identify the specific math concepts and skills that LLMs struggle with, and then generates new benchmark problems targeting these weaknesses. Experiments show that hypothesis accuracy positively correlates with the difficulty of the generated problems: problems generated from the most accurate hypotheses reduce Llama-3.3-70B-Instruct's accuracy to as low as 45%, compared to 77% on the original MATH benchmark. Furthermore, our pipeline is highly adaptable and can be applied beyond math to explore a wide range of LLM capabilities, making it a valuable tool for investigating how LLMs perform across different domains.
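The pipeline the abstract describes — collect a model's errors, hypothesize the weak concept via error attribution, then synthesize new problems targeting it — can be illustrated with a toy sketch. Everything here (the stand-in `toy_model` solver, the error-attribution rule, the problem templates) is a hypothetical simplification for illustration, not the authors' actual LLM-based implementation.

```python
from collections import Counter

def toy_model(problem):
    # Stand-in solver: fails whenever a problem mixes two concepts
    # (a crude proxy for the "cross-category" weakness the paper targets).
    return "wrong" if "+" in problem["concepts"] else "right"

def attribute_errors(problems, answers):
    # Toy hypothesis formation: the concept tag shared by the most
    # failed problems is the hypothesized weakness.
    failed = [p["concepts"] for p, a in zip(problems, answers) if a == "wrong"]
    return Counter(failed).most_common(1)[0][0] if failed else None

def synthesize(hypothesis, n=3):
    # Generate new benchmark items tagged with the hypothesized weak skill.
    return [{"id": f"new-{i}", "concepts": hypothesis} for i in range(n)]

seed = [
    {"id": "a", "concepts": "algebra"},
    {"id": "b", "concepts": "algebra+geometry"},
    {"id": "c", "concepts": "algebra+geometry"},
]
answers = [toy_model(p) for p in seed]          # step 1: collect errors
hypothesis = attribute_errors(seed, answers)    # step 2: attribute them
new_benchmark = synthesize(hypothesis)          # step 3: generate targeted items
print(hypothesis, len(new_benchmark))
```

In the actual pipeline each of these three steps is performed by an LLM; the sketch only shows how the stages feed into one another.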
Problem

Research questions and friction points this paper is trying to address.

math benchmark
LLM evaluation
automatic problem generation
overfitting
error-prone concepts
Innovation

Methods, ideas, or system contributions that make the work stand out.

hypothesis-driven error analysis
automatic benchmark generation
hard math problem generation
LLM weakness identification
adaptive evaluation pipeline
Jiayu Fu
Department of Computer Science, University of Chicago, Chicago, IL 60637, USA
Mourad Heddaya
Department of Computer Science, University of Chicago, Chicago, IL 60637, USA
Chenhao Tan
University of Chicago
Human-centered AI, Communication & Intelligence, Scientific Discovery, AI alignment, AI governance