🤖 AI Summary
This work addresses a critical gap in existing mathematical formalization benchmarks, which predominantly focus on propositional verification while neglecting the evaluation of explicit solution construction—such as numerical values or algorithms—particularly in applied mathematics. To bridge this gap, the authors propose a construct-and-verify workflow framework that requires agents to first generate concrete solutions and then formally prove their correctness. Building on this framework, they introduce AMBER, a benchmark for applied mathematical reasoning spanning convex analysis, optimization, numerical linear algebra, and high-dimensional probability. Implemented in Lean 4, the benchmark enables the first systematic evaluation of large language models on constructive tasks, revealing that general-purpose reasoning models significantly outperform specialized theorem provers, which suffer from "tactic overfitting" that limits their generalization. The study further underscores the pivotal role of instruction-following capability in multi-task formal reasoning.
📝 Abstract
Recent advances in large language models have demonstrated impressive capabilities in mathematical formalization. However, existing benchmarks focus on logical verification of declarative propositions, often neglecting the task of explicitly synthesizing solutions. This limitation is particularly acute in applied mathematics, where the goal is frequently to derive concrete values or executable algorithms rather than solely proving theorems. To address this, we introduce a Lean 4 framework that enforces a construction-verification workflow, compelling the agent to define explicit solutions before proving their correctness. We curate AMBER (Applied Mathematics BEnchmark for Reasoning), a comprehensive benchmark spanning core domains of applied mathematics, including convex analysis, optimization, numerical algebra, and high-dimensional probability. Beyond theorem proving, our benchmark features complex tasks such as evaluation, algorithm design, and representation transformation. Experiments reveal that current models face significant difficulties with these constructive tasks. Notably, we observe that general-purpose reasoning models consistently outperform specialized theorem provers. We attribute this to a degradation of instruction-following capabilities in specialized models. Fine-tuning on proof corpora appears to induce "tactic overfitting", compromising the ability to adhere to complex constructive requirements, whereas general models retain the versatility needed for multi-task formal reasoning.
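To make the construction-verification workflow concrete, here is a minimal Lean 4 sketch of what such a two-stage task might look like. The problem, names, and structure are illustrative assumptions, not taken from AMBER itself; the point is that the agent must first commit to an explicit `solution` term and only then discharge a correctness theorem about it (assuming Mathlib is available):

```lean
import Mathlib

-- Hypothetical AMBER-style task (names are illustrative, not from the benchmark):
-- minimize f x = (x - 3)^2 over ℚ.
def f (x : ℚ) : ℚ := (x - 3) ^ 2

-- Stage 1 (construct): the agent must produce a concrete value,
-- not merely assert that a minimizer exists.
def solution : ℚ := 3

-- Stage 2 (verify): prove the constructed value attains the minimum.
theorem solution_is_min : ∀ x : ℚ, f solution ≤ f x := by
  intro x
  -- f solution reduces to 0, and any square is nonneg.
  simpa [f, solution] using sq_nonneg (x - 3)
```

A purely propositional benchmark could state this task as `∃ m, ∀ x, f m ≤ f x`, which a prover might close without ever exhibiting `m`; requiring the `def` forces the explicit construction that the paper argues existing benchmarks neglect.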