🤖 AI Summary
This work addresses three fundamental challenges in model checking (insufficient precision, state-space explosion, and spurious counterexamples) by systematically reducing model checking to program verification. Methodologically, we design MOKA, a domain-specific language that encodes ACTL and universal μ-calculus formulas as programs; within an abstract interpretation framework, we construct a Kleene algebraic semantic model and introduce locally complete abstractions coupled with counterexample-guided dynamic domain refinement, combining under-approximation and abstraction so that the analysis precision can be tuned as needed. Our contributions are threefold: (1) the first rigorous reduction of model checking to program verification under abstract interpretation; (2) support for non-partitioning abstractions, significantly reducing false alarms; and (3) a theoretical guarantee that every initial state violating the formula is detected. The resulting analyzer is general-purpose and precision-tunable.
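The counterexample-guided domain refinement mentioned above can be illustrated with a toy sketch. All names here are illustrative assumptions, and the domain is a deliberately simple partition of a finite state space (the paper's framework notably also supports non-partitioning abstractions): when the abstract over-approximation raises an alarm that the concrete check refutes, the blocks responsible for the spurious alarm are split, and the analysis is rerun on the refined domain.

```python
# Toy sketch (not the paper's MOKA semantics): the abstract domain is a
# partition of a finite state space into blocks, and abstraction maps a
# state set to the union of blocks it intersects (an over-approximation).

def alpha(blocks, s):
    """Over-approximate state set s by the union of blocks it touches."""
    return set().union(*(b for b in blocks if b & s)) if s else set()

def analyze(blocks, reachable, bad):
    """Refine the domain until the abstract check agrees with the concrete one.

    Returns the (possibly refined) domain and whether a true alarm remains.
    """
    while True:
        abstract_hit = alpha(blocks, reachable) & bad
        if not abstract_hit:
            return blocks, False          # proved safe in the abstract
        if reachable & bad:
            return blocks, True           # alarm confirmed concretely
        # Spurious counterexample: split every block that mixes genuinely
        # reachable states with the spuriously flagged ones.
        new_blocks = []
        for b in blocks:
            if b & abstract_hit and b & reachable:
                new_blocks += [b & reachable, b - reachable]
            else:
                new_blocks.append(b)
        blocks = [frozenset(b) for b in new_blocks if b]
```

For example, with blocks `{0..4}` and `{5..9}`, reachable states `{1, 2}`, and bad state `{4}`, the first abstract check flags state 4 spuriously; one refinement step splits the first block into `{1, 2}` and `{0, 3, 4}`, after which the abstract check proves safety with no alarm.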
📝 Abstract
Abstract interpretation offers a powerful toolset for static analysis, tackling precision, complexity, and state-explosion issues. In the literature, state-partitioning abstractions based on (bi)simulation and property-preserving state relations have been successfully applied to abstract model checking. Here, we pursue a different track in which model checking is seen as an instance of program verification. To this purpose, we introduce a suitable language, called MOKA (for MOdel checking as abstract interpretation of Kleene Algebras), which is used to encode temporal formulae as programs. In particular, we show that (universal fragments of) temporal logics, such as ACTL or, more generally, the universal μ-calculus, can be transformed into MOKA programs. Such programs return all and only the initial states which violate the formula. By applying abstract interpretation to MOKA programs, we pave the way for reusing abstractions more general than partitions, as well as for tuning the precision of the abstraction to remove or avoid false alarms. We show how to perform model checking via a program logic that combines under-approximation and abstract interpretation analysis to avoid false alarms. The notion of locally complete abstraction is used to dynamically improve the analysis precision via counterexample-guided domain refinement.
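To make concrete what it means for a program to "return all and only the initial states which violate the formula", here is a minimal explicit-state sketch for the ACTL formula AG p: an initial state violates AG p exactly when some state falsifying p is reachable from it. The function name and the data encoding are illustrative assumptions, not MOKA syntax, and the backward fixpoint below stands in for the (abstractly interpretable) program the translation would produce.

```python
# Hypothetical explicit-state sketch for AG p: compute, via a backward
# reachability fixpoint, the states that can reach a state violating p,
# then intersect with the initial states.

def violating_initial_states(init, trans, p):
    """Return all and only the initial states that violate AG p.

    init  : set of initial states
    trans : dict mapping each state to its set of successor states
    p     : set of states satisfying the atomic proposition p
    """
    # Seed: states that violate p outright.
    bad = {s for s in trans if s not in p}
    # Backward fixpoint: add any state with a successor already in bad.
    changed = True
    while changed:
        changed = False
        for s, succs in trans.items():
            if s not in bad and succs & bad:
                bad.add(s)
                changed = True
    return init & bad
```

For instance, on the system `{0: {1}, 1: {2}, 2: {2}, 3: {3}}` with `p = {0, 1, 3}` and initial states `{0, 3}`, only state 0 is returned: it reaches state 2, where p fails, while state 3 loops safely within p.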