🤖 AI Summary
This work addresses a gap in evaluating large language models (LLMs) on formal proofs in Lean: existing benchmarks are largely confined to Mathlib and fail to capture the complex project structures and cross-file dependencies prevalent in real-world software verification. To bridge this gap, we introduce VeriSoftBench, the first repository-scale benchmark for software verification, comprising 500 Lean 4 proof tasks derived from open-source formalization projects, with their original dependency closures and code structures fully preserved. Experimental results show that state-of-the-art models perform significantly worse on VeriSoftBench than on Mathlib tasks, with proof success rates declining as dependency complexity increases. While providing distilled context restricted to a proof's dependency closure partially mitigates the issue, substantial bottlenecks remain in cross-file and multi-hop reasoning.
📝 Abstract
Large language models have achieved striking results in interactive theorem proving, particularly in Lean. However, most benchmarks for LLM-based proof automation are drawn from mathematics in the Mathlib ecosystem, whereas proofs in software verification are developed inside definition-rich codebases with substantial project-specific libraries. We introduce VeriSoftBench, a benchmark of 500 Lean 4 proof obligations drawn from open-source formal-methods developments and packaged to preserve realistic repository context and cross-file dependencies. Our evaluation of frontier LLMs and specialized provers yields three observations. First, provers tuned for Mathlib-style mathematics transfer poorly to this repository-centric setting. Second, success is strongly correlated with transitive repository dependence: tasks whose proofs draw on large, multi-hop dependency closures are less likely to be solved. Third, providing curated context restricted to a proof's dependency closure improves performance relative to exposing the full repository, but nevertheless leaves substantial room for improvement. Our benchmark and evaluation suite are released at https://github.com/utopia-group/VeriSoftBench.
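To make the repository-centric setting concrete, here is a hypothetical sketch of the kind of task the abstract describes: a proof obligation whose statement and proof depend on project-specific definitions living in a different file. All names and files below are illustrative, not drawn from VeriSoftBench itself.

```lean
-- File: Project/Queue/Defs.lean — project-specific definitions
structure Queue (α : Type) where
  front : List α
  back  : List α

/-- Abstraction function: the queue's contents as a list. -/
def Queue.toList (q : Queue α) : List α :=
  q.front ++ q.back.reverse

def Queue.enqueue (q : Queue α) (x : α) : Queue α :=
  { q with back := x :: q.back }

-- File: Project/Queue/Lemmas.lean — the proof obligation
-- import Project.Queue.Defs   (cross-file dependency)
theorem Queue.toList_enqueue (q : Queue α) (x : α) :
    (q.enqueue x).toList = q.toList ++ [x] := by
  simp [Queue.toList, Queue.enqueue]
```

A prover must locate and unfold `Queue.toList` and `Queue.enqueue` from the other file before any standard list reasoning applies; in real projects this dependency closure can span many files and multiple hops, which is the difficulty the benchmark is designed to measure.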