Supporting Software Formal Verification with Large Language Models: An Experimental Study

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Natural language requirements are ill-suited for direct use in formal verification. Method: This work proposes an automated property generation framework that integrates large language models (LLMs) with formal verification tools. It introduces an assertion generation mechanism that extends beyond Linear Temporal Logic (LTL) to support numerical constraints and compositional modeling of system behavior; integrates Claude 3.5 Sonnet with the ESBMC bounded model checker; and employs human-in-the-loop supervision to calibrate output quality. Contribution/Results: Counterexample analysis identifies and characterizes the systematic impact of model connection errors and numerical approximations on the verification outcomes of prior tooling, reducing false positives and uncovering previously overlooked falsifiable scenarios. Evaluated on nine cyber-physical systems from Lockheed Martin, the approach achieves 46.5% verification accuracy, on par with NASA's CoCoSim, while substantially lowering the barrier to formal verification and improving defect detection.

📝 Abstract
Formal methods have been employed for requirements verification for a long time. However, it is difficult to automatically derive properties from natural language requirements. SpecVerify addresses this challenge by integrating large language models (LLMs) with formal verification tools, providing a more flexible mechanism for expressing requirements. This framework combines Claude 3.5 Sonnet with the ESBMC verifier to form an automated workflow. Evaluated on nine cyber-physical systems from Lockheed Martin, SpecVerify achieves 46.5% verification accuracy, comparable to NASA's CoCoSim, but with lower false positives. Our framework formulates assertions that extend beyond the expressive power of LTL and identifies falsifiable cases that are missed by more traditional methods. Counterexample analysis reveals CoCoSim's limitations stemming from model connection errors and numerical approximation issues. While SpecVerify advances verification automation, our comparative study of Claude, ChatGPT, and Llama shows that high-quality requirements documentation and human monitoring remain critical, as models occasionally misinterpret specifications. Our results demonstrate that LLMs can significantly reduce the barriers to formal verification, while highlighting the continued importance of human-machine collaboration in achieving optimal results.
Problem

Research questions and friction points this paper is trying to address.

Automatically derive properties from natural language requirements
Enhance formal verification with LLM integration
Reduce false positives in software verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Claude 3.5 Sonnet with ESBMC verifier
Generates assertions beyond LTL expressive power
Reduces formal verification barriers via LLMs