🤖 AI Summary
Existing LLM mathematical benchmarks emphasize exact solutions or formal proofs, overlooking the approximation-based modeling tasks that pervade the applied sciences. Method: HARDMath2 is a high-quality, expert-curated benchmark for asymptotic analysis and applied mathematics, comprising 211 original problems spanning boundary-layer theory, the WKB method, asymptotic solutions of nonlinear PDEs, and the asymptotics of oscillatory integrals. Developed collaboratively by Harvard instructors and students, it employs a novel "student-led, human–model interactive" construction paradigm: difficult problems are reverse-engineered from LLM failure cases and refined through human authoring, peer verification, automated LLM solving, and validation against numerical ground truths. Contribution/Results: State-of-the-art LLMs perform poorly on HARDMath2, revealing critical gaps in asymptotic reasoning. Notably, students deepened their own mathematical understanding by diagnosing model errors. HARDMath2 fills a fundamental gap in evaluating LLMs' applied mathematical competence and establishes a new paradigm for rigorous, pedagogically informed reasoning assessment.
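For context on what "asymptotic reasoning" entails here, a textbook illustration (a standard result, not a problem drawn from HARDMath2) is the leading-order WKB approximation for the singularly perturbed equation $\epsilon^{2} y'' = Q(x)\,y$ with $Q(x) > 0$:

```latex
% Leading-order (physical-optics) WKB approximation for
%   eps^2 y''(x) = Q(x) y(x),   Q(x) > 0,   eps -> 0+.
% Standard textbook result; illustrative only, not a HARDMath2 problem.
y(x) \sim \frac{c_{\pm}}{Q(x)^{1/4}}
          \exp\!\left( \pm \frac{1}{\epsilon} \int^{x} \sqrt{Q(t)}\, dt \right),
\qquad \epsilon \to 0^{+}
```

Answers of this form are controlled approximations in the limit $\epsilon \to 0^{+}$ rather than exact closed forms, which is precisely the kind of output that exact-answer benchmarks do not test.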
📝 Abstract
Large language models (LLMs) have shown remarkable progress in mathematical problem-solving, but evaluation has largely focused on problems that have exact analytical solutions or involve formal proofs, often overlooking approximation-based problems ubiquitous in applied science and engineering. To fill this gap, we build on prior work and present HARDMath2, a dataset of 211 original problems covering the core topics in an introductory graduate applied mathematics class, including boundary-layer analysis, WKB methods, asymptotic solutions of nonlinear partial differential equations, and the asymptotics of oscillatory integrals. This dataset was designed and verified by the students and instructors of a core graduate applied mathematics course at Harvard. We build the dataset through a novel collaborative environment that challenges students to write and refine difficult problems consistent with the class syllabus, peer-validate solutions, test different models, and automatically check LLM-generated solutions against their own answers and numerical ground truths. Evaluation results show that leading frontier models still struggle with many of the problems in the dataset, highlighting a gap in the mathematical reasoning skills of current LLMs. Importantly, students identified strategies to create increasingly difficult problems by interacting with the models and exploiting common failure modes. This back-and-forth with the models not only resulted in a richer and more challenging benchmark but also led to qualitative improvements in the students' understanding of the course material, a benefit that is increasingly important as state-of-the-art language models become capable of solving challenging problems across a wide range of fields.
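The paper's automated checking pipeline is not reproduced here, but its core idea, comparing an asymptotic answer against a numerical ground truth, can be sketched as below. The specific ODE is a standard boundary-layer example and all names are illustrative, not taken from the benchmark.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative singularly perturbed BVP (a textbook boundary-layer
# example, not a HARDMath2 problem):
#   eps * y'' + y' + y = 0,  y(0) = 0,  y(1) = 1,  0 < eps << 1.
eps = 1e-2

def ode(x, y):
    # First-order system: y[0] = y, y[1] = y'.
    return np.vstack([y[1], -(y[1] + y[0]) / eps])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

# Initial mesh graded toward x = 0 to resolve the O(eps) boundary layer.
x = np.concatenate([np.linspace(0.0, 10.0 * eps, 200),
                    np.linspace(10.0 * eps, 1.0, 200)[1:]])
sol = solve_bvp(ode, bc, x, np.zeros((2, x.size)), tol=1e-6, max_nodes=100_000)

def y_asymptotic(x):
    # Leading-order uniform matched-asymptotics approximation:
    # outer solution e^(1-x) plus a boundary-layer correction at x = 0.
    return np.exp(1.0 - x) - np.exp(1.0 - x / eps)

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(sol.sol(xs)[0] - y_asymptotic(xs)))
print(f"max |numeric - asymptotic| = {err:.2e}")  # expected to shrink as O(eps)
```

A check of this kind supplies a ground truth even when no closed-form solution exists, which is what makes approximation problems automatically gradable at scale.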