DRIFT: Decompose, Retrieve, Illustrate, then Formalize Theorems

📅 2025-10-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing retrieval-augmented autoformalization methods suffer from semantic ambiguity in informal mathematical statements and a lack of contextual grounding, which hinders precise retrieval of the required premises. This paper proposes DRIFT, a framework that breaks complex propositions apart via **subproblem decomposition**, integrates an **adaptive retrieval mechanism** that tailors queries to the knowledge boundaries of different large language models (LLMs), and employs **example-guided formalization**, jointly improving premise matching and Lean formalization. DRIFT combines retrieval-augmented generation (RAG), mathematical-library (e.g., Mathlib) retrieval, and structured reasoning. On ProofNet it nearly doubles the premise-retrieval F1 score over the DPR baseline; on ConNF, BEq+@10 improves by up to 42.25%. It also generalizes well across distribution shifts, making DRIFT an interpretable and scalable approach to LLM-driven mathematical formalization.
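The four stages named in the title can be sketched as a minimal pipeline. Everything below is an illustrative stand-in, not the paper's implementation: the clause-based decomposer, the keyword-overlap retriever, and the toy premise index replace what DRIFT drives with an LLM and a Mathlib index.

```python
# Minimal sketch of the DRIFT stages: Decompose, Retrieve, Illustrate, Formalize.
# Toy premise index: Mathlib-style names mapped to informal keywords (hypothetical).
PREMISE_INDEX = {
    "Even.add": {"even", "sum", "add"},
    "Nat.Prime.two_le": {"prime", "least", "two"},
    "mul_comm": {"multiplication", "commutative"},
}

def decompose(statement: str) -> list[str]:
    """Split an informal statement into sub-components (toy: clause split on commas)."""
    return [part.strip() for part in statement.split(",") if part.strip()]

def retrieve(sub: str, k: int = 2) -> list[str]:
    """Rank premises by keyword overlap with one sub-component; drop zero-overlap hits."""
    words = set(sub.lower().split())
    scored = sorted(PREMISE_INDEX, key=lambda p: -len(PREMISE_INDEX[p] & words))
    return [p for p in scored[:k] if PREMISE_INDEX[p] & words]

def build_prompt(statement: str, premises: list[str], examples: list[str]) -> str:
    """Assemble the formalization prompt: statement + premises + illustrative theorems."""
    return "\n".join([
        f"Formalize in Lean: {statement}",
        "Available premises: " + ", ".join(premises),
        "Illustrative examples: " + "; ".join(examples),
    ])

statement = "If a and b are even, then a + b is even"
premises: list[str] = []
for sub in decompose(statement):           # Decompose
    for p in retrieve(sub):                # Retrieve (per sub-component)
        if p not in premises:
            premises.append(p)

# Illustrate: attach a retrieved example theorem, then hand off for Formalize.
prompt = build_prompt(statement, premises,
                      ["Even.add : Even m → Even n → Even (m + n)"])
print(premises)  # → ['Even.add']
```

In DRIFT the final step would send `prompt` to the formalizing LLM; here the sketch stops at prompt assembly.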

๐Ÿ“ Abstract
Automating the formalization of mathematical statements for theorem proving remains a major challenge for Large Language Models (LLMs). LLMs struggle to identify and utilize the prerequisite mathematical knowledge and its corresponding formal representation in languages like Lean. Current retrieval-augmented autoformalization methods query external libraries using the informal statement directly, but overlook a fundamental limitation: informal mathematical statements are often complex and offer limited context on the underlying math concepts. To address this, we introduce DRIFT, a novel framework that enables LLMs to decompose informal mathematical statements into smaller, more tractable ''sub-components''. This facilitates targeted retrieval of premises from mathematical libraries such as Mathlib. Additionally, DRIFT retrieves illustrative theorems to help models use premises more effectively in formalization tasks. We evaluate DRIFT across diverse benchmarks (ProofNet, ConNF, and MiniF2F-test) and find that it consistently improves premise retrieval, nearly doubling the F1 score compared to the DPR baseline on ProofNet. Notably, DRIFT demonstrates strong performance on the out-of-distribution ConNF benchmark, with BEq+@10 improvements of 37.14% and 42.25% using GPT-4.1 and DeepSeek-V3.1, respectively. Our analysis shows that retrieval effectiveness in mathematical autoformalization depends heavily on model-specific knowledge boundaries, highlighting the need for adaptive retrieval strategies aligned with each model's capabilities.
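The abstract reports premise-retrieval quality as an F1 score. The standard set-based definition over retrieved versus gold premises can be computed as follows; the premise names in the example are hypothetical, not from the paper's data.

```python
# Set-based precision/recall/F1 for premise retrieval (standard definitions).

def retrieval_f1(retrieved: set[str], gold: set[str]) -> float:
    """F1 between the set of retrieved premises and the gold premise set."""
    if not retrieved or not gold:
        return 0.0
    tp = len(retrieved & gold)  # true positives: correctly retrieved premises
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

retrieved = {"Even.add", "mul_comm", "Nat.Prime.two_le"}
gold = {"Even.add", "Even.mul_left"}
# precision = 1/3, recall = 1/2, so F1 = 0.4
print(round(retrieval_f1(retrieved, gold), 4))  # → 0.4
```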
Problem

Research questions and friction points this paper is trying to address.

Automating mathematical statement formalization for theorem proving using LLMs
Identifying prerequisite mathematical knowledge and corresponding formal representations
Addressing limited context in informal mathematical statements for autoformalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes informal math statements into sub-components
Retrieves premises from math libraries using sub-components
Retrieves illustrative theorems to aid formalization process
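To make the target of these steps concrete, here is the kind of Lean output the formalization stage aims for, assuming the Mathlib parity lemma `Even.add`; the theorem name `even_sum` and the informal statement are illustrative, not taken from the paper's benchmarks.

```lean
import Mathlib.Algebra.Group.Even

-- Target of formalization: "the sum of two even numbers is even",
-- closed with the retrieved Mathlib premise `Even.add`.
theorem even_sum {a b : ℕ} (ha : Even a) (hb : Even b) : Even (a + b) :=
  ha.add hb
```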