Neural Theorem Proving for Verification Conditions: A Real-World Benchmark

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated verification condition (VC) proving remains a critical bottleneck in program verification, as existing automated theorem provers struggle with the complexity of real-world VCs, often necessitating substantial manual intervention. This work proposes NTP4VC, the first neural theorem proving benchmark tailored to real-world, multi-language VCs. Leveraging industrial-grade toolchains such as Why3 and Frama-C, the benchmark derives semantically equivalent formal test cases in Isabelle, Lean, and Rocq from large-scale software projects including Linux and Contiki-OS. The study systematically evaluates both general-purpose and theorem-proving-fine-tuned large language models on this benchmark. Experimental results indicate that while large language models show initial promise in VC proving, they still face significant challenges, thereby highlighting a crucial research opportunity for advancing automated reasoning in program verification.
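To make the task concrete: a VC is a logical obligation stating that a program's contract holds along every execution path. As a minimal hypothetical sketch (not a case from the benchmark, and ignoring C machine-integer overflow for simplicity), a Frama-C/Why3 pipeline would turn an ACSL contract like `ensures \result >= 0` on an integer absolute-value function into a proof obligation, which rendered in Lean 4 might look like:

```lean
-- Hypothetical VC for a C function `abs_val` with ACSL contract `ensures \result >= 0`.
-- Illustrative only: real VCs emitted by Frama-C/Why3 additionally model machine
-- integers, overflow, and memory, which is what makes them hard for ATPs.
theorem abs_val_vc (x : Int) : (if x < 0 then -x else x) ≥ 0 := by
  split <;> omega  -- case-split on the conditional, then close both integer goals
```

Toy obligations like this are easy for existing solvers; the benchmark's point is that VCs extracted from projects like Linux and Contiki-OS are far larger and routinely defeat them.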

📝 Abstract
Theorem proving is fundamental to program verification, where the automated proof of Verification Conditions (VCs) remains a primary bottleneck. Real-world program verification frequently encounters hard VCs that existing Automated Theorem Provers (ATPs) cannot prove, creating a critical need for extensive manual proofs that burdens practical application. While Neural Theorem Proving (NTP) has achieved significant success in mathematical competitions, demonstrating the potential of machine learning approaches to formal reasoning, its application to program verification, particularly VC proving, remains largely unexplored. Despite existing work on annotation synthesis and verification-related theorem proving, no benchmark has specifically targeted this fundamental bottleneck: automated VC proving. This work introduces Neural Theorem Proving for Verification Conditions (NTP4VC), the first real-world, multi-language benchmark for this task. Starting from real-world projects such as the Linux kernel and Contiki-OS, our benchmark leverages industrial pipelines (Why3 and Frama-C) to generate semantically equivalent test cases in the formal languages Isabelle, Lean, and Rocq. We evaluate large language models (LLMs), both general-purpose and fine-tuned for theorem proving, on NTP4VC. Results indicate that although LLMs show promise in VC proving, significant challenges remain for program verification, highlighting a large gap and an opportunity for future research.
Problem

Research questions and friction points this paper is trying to address.

Verification Conditions
Automated Theorem Proving
Program Verification
Neural Theorem Proving
Formal Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Theorem Proving
Verification Conditions
Program Verification
Formal Methods
Large Language Models