🤖 AI Summary
In automated theorem proving (ATP), white-box methods allow inspection of intermediate proof states and incremental proof construction, but they have lagged behind black-box large language model (LLM) approaches. To address this gap, the paper introduces **LeanTree**, a white-box tool built in the Lean 4 language that factorizes complex proof states into simpler, independent branches, together with a dataset of these factorized intermediate states. This factorization enables state reuse, reduced context, parallel search across multiple states, richer training data, and fine-grained error feedback. Preliminary results suggest that this white-box approach outperforms black-box alternatives in some settings, pointing to factorized white-box search as a promising direction for efficient, controllable LLM-augmented formal proof search.
📝 Abstract
Automated theorem proving (ATP) has been a classical problem in artificial intelligence since its inception, yet it remains challenging due to its vast state and action space. Large language models (LLMs) have recently emerged as a promising heuristic for ATP, but they lack correctness guarantees and thus require interaction with a proof verifier. Such interactions typically follow one of two approaches: black-box interaction, which does not utilize intermediate proof states, or white-box approaches, which allow for incremental proof construction and examination of intermediate states. While black-box approaches have directly benefited from recent LLM advances, white-box methods have comparatively lagged behind. In this paper, we address this gap by introducing LeanTree, which consists of (i) a tool built in the Lean 4 language that factorizes complex proof states into simpler, independent branches, and (ii) a dataset of these factorized intermediate states. Our white-box tooling offers several advantages over black-box approaches: it simplifies evaluation, reduces necessary context, generates richer training data, enables parallel search across multiple states, supports efficient reuse of states, and provides feedback in case of errors. Our preliminary results hint that white-box approaches outperform black-box alternatives in some settings.
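To make the idea of factorization concrete, the following is a minimal Lean 4 illustration (not the LeanTree API itself, just a hypothetical example of the underlying mechanism): a tactic such as `constructor` splits one proof state into independent subgoals, and each resulting branch can then be searched, verified, and reused in isolation.

```lean
-- A conjunction goal factorizes into two independent subgoals.
-- Each branch below is a self-contained proof state that could be
-- explored in parallel and checked separately by the verifier.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor    -- splits the state into `⊢ p` and `⊢ q`
  · exact hp     -- branch 1: closed on its own
  · exact hq     -- branch 2: closed on its own
```

In a black-box setting, an LLM would have to emit the whole proof script at once; the factorized view instead exposes each subgoal as a separate, smaller context, with errors localized to the branch that fails.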