🤖 AI Summary
Automated formal verification of large-scale software and hardware systems is bottlenecked by the scalability of specification generation and the difficulty of cross-procedural invariant inference. This paper proposes Preguss, a framework that (1) identifies fine-grained, modular verification units guided by potential runtime errors; (2) integrates static analysis with large language models (LLMs) to enable LLM-assisted synthesis and refinement of cross-procedural specifications; and (3) coordinates static analysis and deductive verification via adaptive scheduling to improve verification efficiency. Its core innovation lies in embedding LLMs within the verification feedback loop: error-driven unit decomposition and modular collaboration overcome LLM context-length limitations, significantly enhancing specification automation and system-level scalability. Preguss thus establishes a viable pathway toward fully automated, end-to-end formal verification of large-scale software systems.
📝 Abstract
Fully automated verification of large-scale software and hardware systems is arguably the holy grail of formal methods. Large language models (LLMs) have recently demonstrated their potential for enhancing the degree of automation in formal verification by, e.g., generating the formal specifications essential to deductive verification, yet they exhibit poor scalability due to context-length limitations and, more importantly, the difficulty of inferring complex, interprocedural specifications. This paper outlines Preguss, a modular, fine-grained framework for automating the generation and refinement of formal specifications. Preguss synergizes static analysis and deductive verification by orchestrating two components: (i) potential runtime error (RTE)-guided construction and prioritization of verification units, and (ii) LLM-aided synthesis of interprocedural specifications at the unit level. We envisage that Preguss paves a compelling path towards the automated verification of large-scale programs.
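To make the abstract concrete, the following is a minimal, hypothetical sketch of the kind of RTE-blocking, interprocedural specification such a framework would aim to synthesize. The annotations are written in ACSL (the contract language used by deductive verifiers such as Frama-C/WP, a common target for this style of work; the paper itself does not fix a toolchain here). The function names and contracts are illustrative, not taken from the paper.

```c
#include <limits.h>

/* Hypothetical leaf unit: integer division with two potential RTEs,
   division by zero and signed overflow (INT_MIN / -1).
   The ACSL contract below rules both out. */
/*@ requires d != 0;                          // blocks RTE: division by zero
    requires !(n == INT_MIN && d == -1);      // blocks RTE: signed overflow
    assigns \nothing;
    ensures \result == n / d;
*/
int safe_div(int n, int d) {
    return n / d;
}

/* Hypothetical caller unit: to verify this function, the preconditions of
   safe_div must be propagated interprocedurally into its own contract —
   exactly the cross-procedural specification inference the paper targets. */
/*@ requires count != 0;
    requires !(total == INT_MIN && count == -1);
    assigns \nothing;
    ensures \result == total / count;
*/
int per_item_cost(int total, int count) {
    return safe_div(total, count);
}
```

The ACSL comments are ignored by an ordinary C compiler, so the code runs as plain C; a deductive verifier would discharge proof obligations showing that every call site satisfies the callee's `requires` clauses.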