AI Summary
Existing retrieval-augmented autoformalization methods suffer from semantic ambiguity in informal mathematical statements and a lack of contextual grounding, which hinders precise retrieval of the requisite premises. This paper proposes DRIFT, a framework that explicitly decomposes complex propositions via **subproblem decomposition**, integrates an **adaptive retrieval mechanism** that dynamically tailors queries to the knowledge boundaries of different large language models (LLMs), and employs **example-guided formalization generation**, jointly optimizing premise matching and Lean formalization. DRIFT unifies retrieval-augmented generation (RAG), mathematical-library retrieval (e.g., from Mathlib), and structured reasoning. On ProofNet, it nearly doubles the F1 score; on ConNF, BEq+@10 improves by up to 42.25%. It also significantly enhances cross-distribution generalization. DRIFT establishes an interpretable and scalable paradigm for LLM-driven mathematical formalization.
Abstract
Automating the formalization of mathematical statements for theorem proving remains a major challenge for Large Language Models (LLMs). LLMs struggle to identify and utilize the prerequisite mathematical knowledge and its corresponding formal representation in languages like Lean. Current retrieval-augmented autoformalization methods query external libraries using the informal statement directly, but overlook a fundamental limitation: informal mathematical statements are often complex and offer limited context on the underlying math concepts. To address this, we introduce DRIFT, a novel framework that enables LLMs to decompose informal mathematical statements into smaller, more tractable "sub-components". This facilitates targeted retrieval of premises from mathematical libraries such as Mathlib. Additionally, DRIFT retrieves illustrative theorems to help models use premises more effectively in formalization tasks. We evaluate DRIFT across diverse benchmarks (ProofNet, ConNF, and MiniF2F-test) and find that it consistently improves premise retrieval, nearly doubling the F1 score compared to the DPR baseline on ProofNet. Notably, DRIFT demonstrates strong performance on the out-of-distribution ConNF benchmark, with BEq+@10 improvements of 37.14% and 42.25% using GPT-4.1 and DeepSeek-V3.1, respectively. Our analysis shows that retrieval effectiveness in mathematical autoformalization depends heavily on model-specific knowledge boundaries, highlighting the need for adaptive retrieval strategies aligned with each model's capabilities.
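The pipeline described above (decompose the informal statement, retrieve premises per sub-component, then formalize with illustrative examples) can be sketched in Python. Everything here is illustrative: the decomposition and retrieval functions are naive heuristic stand-ins for the paper's LLM-based decomposition and dense premise retrieval, and the function names and toy library are assumptions, not DRIFT's actual implementation; only the overall control flow mirrors the idea.

```python
# Illustrative sketch of a DRIFT-style pipeline. The heuristics below are
# naive stand-ins (assumptions) for LLM-based decomposition and dense
# premise retrieval; only the control flow mirrors the described approach.

# Toy premise library: formal name -> informal description.
MATHLIB = {
    "Nat.Prime": "a natural number greater than one whose only divisors are one and itself",
    "Nat.even_add": "the sum of two even natural numbers is even",
}

def decompose(informal: str) -> list[str]:
    """Stand-in for LLM-driven subproblem decomposition: split the
    statement on clause separators instead of asking a model."""
    parts = [p.strip() for p in informal.replace(";", ",").split(",")]
    return [p for p in parts if p]

def retrieve_premises(sub: str, library: dict[str, str], k: int = 1) -> list[str]:
    """Stand-in for dense retrieval: rank library entries by word overlap
    with the sub-component and return the top-k premise names."""
    words = set(sub.lower().split())
    scored = sorted(
        library.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

def build_prompt(informal: str, premises: list[str], examples: list[str]) -> str:
    """Assemble an example-guided formalization prompt from the retrieved
    premises and illustrative formalized theorems."""
    lines = ["Formalize in Lean:", informal, "Relevant premises:"]
    lines += [f"- {p}" for p in premises]
    lines += ["Examples:"] + [f"- {e}" for e in examples]
    return "\n".join(lines)

statement = "for all n, if n is even then n + n is even"
subs = decompose(statement)
premises = sorted({p for s in subs for p in retrieve_premises(s, MATHLIB)})
prompt = build_prompt(statement, premises, ["theorem even_add_even ..."])
print(prompt)
```

Retrieving per sub-component rather than from the whole statement is the key design choice: each clause is a sharper query, so premises that match only one part of a complex proposition are not drowned out.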