Agentic Program Verification

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of automated verification for AI-generated code, this paper introduces AutoRocq—the first LLM-based agent framework tailored for program verification. AutoRocq enables tight collaboration between a large language model and Rocq, a Coq-derived interactive theorem prover, establishing a closed-loop “generate–verify–refine” workflow: the agent autonomously invokes Rocq to construct formal proofs and iteratively refines its strategy using proof feedback, without requiring large-scale supervised training. Its key innovations are a lightweight online learning mechanism and an interpretable, co-reasoning paradigm that integrates symbolic reasoning with neural guidance. Experiments on the SV-COMP benchmark and real-world Linux kernel modules demonstrate that AutoRocq substantially improves automation, verification success rate, and trustworthiness of AI-generated code—offering a novel pathway toward reliable deployment of AI-assisted programming.

📝 Abstract
Automatically generated code has been gaining traction recently, owing to the prevalence of Large Language Models (LLMs). Further, the AlphaProof initiative has demonstrated the possibility of using AI for general mathematical reasoning. Reasoning about computer programs (software) can be accomplished via general mathematical reasoning; however, program reasoning tends to be more structured and richer in context. This forms an attractive proposition, since AI agents can then be used to reason about the voluminous code generated by AI. In this work, we present a first LLM agent, AutoRocq, for conducting program verification. Unlike past works, which rely on extensive training of LLMs on proof examples, our agent learns on-the-fly and improves the proof via an iterative refinement loop. The iterative improvement is achieved by the proof agent communicating with the Rocq (formerly Coq) theorem prover to obtain additional context and feedback. The final result of the iteration is a proof derivation checked by the Rocq theorem prover. In this way, our proof construction involves autonomous collaboration between the proof agent and the theorem prover. This autonomy facilitates the search for proofs and the decisions about the structure of the proof tree. Experimental evaluation on SV-COMP benchmarks and on Linux kernel modules shows promising efficacy in achieving automated program verification. As automation in code generation becomes more widespread, we posit that our proof agent can be integrated with AI coding agents to achieve a generate-and-validate loop, thus moving closer to the vision of trusted automatic programming.
Problem

Research questions and friction points this paper is trying to address.

Automated verification of AI-generated code using LLM agents
Iterative proof refinement via agent-theorem prover collaboration
Achieving trusted automatic programming through generate-validate loops
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agent learns on-the-fly for program verification
Iterative refinement loop improves proof via theorem prover
Autonomous collaboration between agent and theorem prover
Haoxin Tu — National University of Singapore, Singapore
Huan Zhao — National University of Singapore, Singapore
Yahui Song — National University of Singapore, Singapore
Mehtab Zafar — National University of Singapore, Singapore
Ruijie Meng — National University of Singapore (Software Security, Program Analysis)
Abhik Roychoudhury — Professor of Computer Science, National University of Singapore (Program Analysis, Software Security, AI Agents)