🤖 AI Summary
This work proposes Mechanic, a novel automated theorem proving system that addresses the inefficiency and contextual redundancy plaguing existing approaches to complex mathematical problems. Traditional methods either regenerate full proofs from scratch at high cost or accumulate ever-longer contexts through iterative error correction. Mechanic introduces a formal decomposition strategy centered on Lean's `sorry` placeholders, precisely isolating unresolved subgoals while preserving the verified proof structure. By extracting failed subproblems into independent contexts, the system delegates them to large language model agents for targeted resolution. This approach simultaneously ensures proof reuse and maintains contextual conciseness, overcoming the limitations of conventional regeneration or repair paradigms. Evaluated on challenging mathematical competition benchmarks—including IMO 2025 and Putnam 2025—Mechanic demonstrates significant gains in proving efficiency.
📝 Abstract
Recent advances in large language models (LLMs) and LLM-based agents have substantially improved the capabilities of automated theorem proving. However, for problems requiring complex mathematical reasoning, current systems rarely succeed on the first try and must repeatedly modify their proof strategies. Existing approaches for handling failed attempts typically either discard the entire proof and regenerate it from scratch or iteratively fix errors within the proof. The former is inefficient, as it may abandon mostly correct reasoning due to localized errors, while the latter, although preserving prior progress, leads to progressively longer contexts that degrade the model's ability to attend to the remaining unresolved subproblems. To address this dilemma, we propose Mechanic, a novel agent system that employs a sorry-driven formal decomposition strategy. By leveraging the `sorry` placeholder in Lean to precisely isolate unresolved subgoals while preserving the surrounding verified proof structure, Mechanic extracts each failed subproblem into a clean, self-contained context and resolves it independently. This avoids both the waste of full regeneration and the excessive context length induced by repeated repairs. Experimental results on challenging mathematical competition benchmarks, including IMO 2025 and Putnam 2025, demonstrate that our agent achieves significant advantages in proving efficiency.
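The sorry-driven decomposition idea can be illustrated with a minimal Lean 4 sketch (hypothetical example, not from the paper): the verified outer steps of a proof are kept, the stuck subgoal is marked with `sorry`, and that subgoal is then lifted into a self-contained lemma for an agent to attack in a fresh context.

```lean
-- A partially completed proof: the first rewrite is verified and preserved,
-- while the remaining subgoal is stubbed out with `sorry`.
theorem add_comm_assoc (a b c : Nat) : a + b + c = c + b + a := by
  rw [Nat.add_comm a b]  -- verified step, reused across iterations
  sorry                  -- unresolved subgoal: b + a + c = c + b + a

-- Mechanic-style extraction: the failed subgoal becomes an independent,
-- self-contained statement that an LLM agent can prove in a clean context.
theorem extracted_subgoal (a b c : Nat) : b + a + c = c + b + a := by
  sorry
```

Once `extracted_subgoal` is proven, its proof can be substituted back for the original `sorry`, so the verified structure around it never needs to be regenerated.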