VeruSAGE: A Study of Agent-Based Verification for Rust Systems

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks and verification paradigms are inadequate for evaluating large language models (LLMs) on formal verification of Rust system software, hindering systematic study in this domain. Method: The authors introduce VeruSAGE-Bench—the first Rust-specific, system-level formal verification benchmark, comprising 849 proof tasks extracted from eight open-source Verus-verified Rust systems—and propose multi-LLM agent architectures integrating task decomposition, tool invocation, and feedback-driven reasoning. They further establish an "LLM-agent co-verification paradigm" that emphasizes adapting models to heterogeneous verification toolchains (e.g., Verus) and to domain-specific reasoning strategies. Contribution/Results: The best-performing LLM-agent combination (drawing on o4-mini, GPT-5, and Sonnet 4.5 with the Verus framework) completes over 80% of tasks in VeruSAGE-Bench and over 90% of a held-out set of system proof tasks that human experts had not yet finished. These results suggest that LLMs, when orchestrated with domain-appropriate tools and strategies, can substantially advance automated formal verification of Rust systems.

📝 Abstract
Large language models (LLMs) have shown impressive capability to understand and develop code. However, their capability to rigorously reason about and prove code correctness remains in question. This paper offers a comprehensive study of LLMs' capability to develop correctness proofs for system software written in Rust. We curate a new system-verification benchmark suite, VeruSAGE-Bench, which consists of 849 proof tasks extracted from eight open-source Verus-verified Rust systems. Furthermore, we design different agent systems to match the strengths and weaknesses of different LLMs (o4-mini, GPT-5, Sonnet 4, and Sonnet 4.5). Our study shows that different tools and agent settings are needed to elicit the system-verification capability of different types of LLMs. The best LLM-agent combination in our study completes over 80% of system-verification tasks in VeruSAGE-Bench. It also completes over 90% of a set of system proof tasks not part of VeruSAGE-Bench because they had not yet been finished by human experts. This result shows the great potential for LLM-assisted development of verified system software.
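To make concrete what a "proof task" in a Verus-verified system involves, here is a minimal sketch in plain Rust. In actual Verus code, `requires`/`ensures` clauses written inside the `verus!` macro are discharged statically by the verifier; the runtime assertions below only approximate that idea for illustration, and `max_u64` is a hypothetical example, not a task from the benchmark.

```rust
// Hypothetical illustration of a proof-task shape (not from VeruSAGE-Bench).
// In Verus, the postconditions below would be written as `ensures` clauses
// and proven at compile time; plain Rust can only check them at runtime.
fn max_u64(a: u64, b: u64) -> u64 {
    let r = if a >= b { a } else { b };
    // Stand-ins for Verus postconditions: ensures r >= a, r >= b, r == a || r == b.
    assert!(r >= a && r >= b);
    assert!(r == a || r == b);
    r
}
```

A benchmark task of this kind asks the model to supply the specification and any supporting proof annotations so that the verifier accepts the implementation, rather than relying on testing.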
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to prove Rust code correctness
Creating a benchmark for system-verification tasks in Rust
Designing agent systems to enhance LLMs' verification performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curated benchmark suite with 849 Rust verification tasks
Designed agent systems tailored to different LLM strengths
Best LLM-agent combination achieved over 80% task completion
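The feedback-driven loop implied by the agent design above can be sketched generically. The names and structure here are assumptions for illustration, not the paper's actual architecture: an LLM proposes a proof, a verifier (e.g., the Verus toolchain) checks it, and verifier errors are fed back for repair until the task verifies or a round budget is exhausted.

```rust
// Hedged sketch of a feedback-driven verification loop (all names hypothetical).
// `propose` models the LLM's initial proof attempt, `verify` models invoking
// the verification toolchain, and `repair` models feeding errors back to the LLM.
fn co_verify<P, R, V>(
    mut propose: P,
    mut repair: R,
    mut verify: V,
    max_rounds: u32,
) -> Option<String>
where
    P: FnMut() -> String,
    R: FnMut(&str, &str) -> String,
    V: FnMut(&str) -> Result<(), String>,
{
    let mut proof = propose();
    for _ in 0..max_rounds {
        match verify(&proof) {
            // Verifier accepted the proof: the task is complete.
            Ok(()) => return Some(proof),
            // Verifier rejected it: pass the error text back for another attempt.
            Err(errors) => proof = repair(&proof, &errors),
        }
    }
    None
}
```

The design choice this highlights is that the verifier acts as ground truth: the agent never has to trust the LLM's output, only iterate until the toolchain accepts it.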