🤖 AI Summary
Large language models (LLMs) lack standardized, formalized evaluation benchmarks for combinatorial mathematics. Method: We introduce CombiBench, the first Lean 4-based formal benchmark for combinatorics, comprising 100 proof and fill-in-the-blank problems that span more than ten combinatorial topics and range from primary-school to International Mathematical Olympiad (IMO) difficulty; we further propose Fine-Eval, the first automated framework for precisely scoring formalized fill-in-the-blank problems, and formalize all non-geometric IMO combinatorics problems since 2000. Contribution/Results: Experiments reveal that current LLMs have severely limited zero-shot formal solving capabilities (at most 7 of 100 problems solved), underscoring the difficulty of formal combinatorial reasoning. CombiBench fills a critical gap in the field, providing a reproducible, extensible benchmark and evaluation infrastructure to advance research in formal mathematical reasoning.
📝 Abstract
Neurosymbolic approaches integrating large language models with formal reasoning have recently achieved human-level performance on mathematics competition problems in algebra, geometry, and number theory. In comparison, combinatorics remains a challenging domain, characterized by a lack of appropriate benchmarks and theorem libraries. To address this gap, we introduce CombiBench, a comprehensive benchmark comprising 100 combinatorial problems, each formalized in Lean 4 and paired with its corresponding informal statement. The problem set covers a wide spectrum of difficulty levels, ranging from middle school to IMO and university level, and spans more than ten combinatorial topics. CombiBench is suitable for testing IMO solving capabilities, since it includes all IMO combinatorics problems since 2000 (except IMO 2004 P3, as its statement contains an image). Furthermore, we provide a comprehensive and standardized evaluation framework, dubbed Fine-Eval (for $\textbf{F}$ill-in-the-blank $\textbf{in}$ L$\textbf{e}$an Evaluation), for formal mathematics. It accommodates not only proof-based problems but also, for the first time, the evaluation of fill-in-the-blank questions. Using Fine-Eval as the evaluation method and Kimina Lean Server as the backend, we benchmark several LLMs on CombiBench and observe that their capabilities for formally solving combinatorial problems remain limited. Among all models tested (none of which has been trained for this particular task), Kimina-Prover attains the best results, solving 7 problems (out of 100) under both the "with solution" and "without solution" scenarios. We open source the benchmark dataset alongside the code of the proposed evaluation method at https://github.com/MoonshotAI/CombiBench/.
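To make the fill-in-the-blank format concrete, the following is a minimal illustrative sketch of what such a formalized problem might look like in Lean 4 (assuming Mathlib is available); the problem statement and all names here are invented for illustration and are not taken from CombiBench itself. The solver must replace both `sorry`s: the blank's value and the proof that the value is correct.

```lean
import Mathlib

-- Hypothetical fill-in-the-blank problem (illustrative only).
-- The blank: a model must fill in the numeric answer here.
abbrev toy_answer : ℕ := sorry

/-- How many ways are there to choose 2 elements from a 5-element set? -/
theorem toy_theorem : Nat.choose 5 2 = toy_answer := sorry
```

Separating the answer (`abbrev`) from the correctness proof (`theorem`) is what lets an automated checker score the blank and the proof independently, rather than only accepting or rejecting a completed proof script.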