SysMoBench: Evaluating AI on Formally Modeling Complex Real-World Systems

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous evaluation of large language models’ (LLMs) capability to formally model complex real-world systems—particularly concurrent and distributed systems. We introduce the first benchmark for assessing AI-driven formal modeling competence on large-scale, production-grade systems. Methodologically, we adopt TLA+ as the specification language and integrate static analysis, model checking, and runtime verification to enable automated, multi-dimensional assessment across four criteria: syntactic correctness, behavioral consistency, code-to-specification fidelity, and invariant validity. The benchmark encompasses nine representative system components—including Etcd, Redis, and Asterinas OS—and covers core concurrency abstractions such as the Raft consensus protocol and OS synchronization primitives. Experimental results expose systematic deficiencies in current LLMs’ ability to abstract deep concurrency semantics and rigorously preserve invariants. Our benchmark provides a reproducible empirical foundation and an extensible evaluation framework for AI-augmented formal methods research.

📝 Abstract
Formal models are essential to specifying large, complex computer systems and verifying their correctness, but are notoriously expensive to write and maintain. Recent advances in generative AI show promise in generating certain forms of specifications. However, existing work mostly targets small pieces of code, not complete systems. It is unclear whether AI can deal with realistic system artifacts, as this requires abstracting their complex behavioral properties into formal models. We present SysMoBench, a benchmark that evaluates AI's ability to formally model large, complex systems. We focus on concurrent and distributed systems, which are keystones of today's critical computing infrastructures, encompassing operating systems and cloud infrastructure. We use TLA+, the de facto specification language for concurrent and distributed systems, though the benchmark can be extended to other specification languages. We address the primary challenge of evaluating AI-generated models by automating metrics like syntactic and runtime correctness, conformance to system code, and invariant correctness. SysMoBench currently includes nine diverse system artifacts: the Raft implementation of Etcd and Redis, the Spinlock and Mutex in Asterinas OS, etc.; more artifacts are being actively added. SysMoBench enables us to understand the capabilities and limitations of today's LLMs and agents, putting tools in this area on a firm footing and opening up promising new research directions.
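To give a sense of what the benchmark asks models to produce, here is a minimal TLA+ sketch of a spinlock with a mutual-exclusion invariant. This example is not taken from the paper; all names (`Spinlock`, `Threads`, `MutualExclusion`) are hypothetical, and a real benchmark specification would model a production implementation in far more detail.

```tla
---- MODULE Spinlock ----
EXTENDS FiniteSets

CONSTANT Threads      \* set of thread identifiers
VARIABLES lock, cs    \* lock: whether the lock is held; cs: threads in the critical section

Init == lock = FALSE /\ cs = {}

\* A thread acquires the free lock and enters the critical section.
Acquire(t) == lock = FALSE /\ lock' = TRUE /\ cs' = cs \cup {t}

\* A thread in the critical section releases the lock and leaves.
Release(t) == t \in cs /\ lock' = FALSE /\ cs' = cs \ {t}

Next == \E t \in Threads : Acquire(t) \/ Release(t)

\* Invariant a model checker such as TLC would verify:
\* at most one thread is ever in the critical section.
MutualExclusion == Cardinality(cs) <= 1
====
```

A model checker exhaustively explores the states reachable from `Init` via `Next` and reports any state violating `MutualExclusion`; the paper's metrics additionally check that such AI-generated specifications conform to the behavior of the actual system code.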
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI's ability to formally model large complex systems
Assessing AI-generated formal specifications for concurrent distributed systems
Automating correctness metrics for AI-generated system behavior models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated metrics for evaluating AI-generated formal models
Benchmark for modeling concurrent and distributed systems
Using TLA+ specification language for system verification