ARCH-COMP25 Category Report: Stochastic Models

📅 2025-06-21
🤖 AI Summary
This paper addresses the lack of standardized benchmarks for stochastic model verification and policy synthesis in the ARCH-COMP’25 Friendly Competition. Method: We propose a novel evaluation framework featuring (i) the first scalable benchmark suite for water distribution systems, accompanied by a curated set of simplified instances; (ii) unified integration of probabilistic model checking (for MDPs and CTMCs), stochastic control theory, and formal specifications expressed in Signal Temporal Logic (STL) and Probabilistic Temporal Logic (PTL); and (iii) development and integration of three new analysis tools. Contributions/Results: We establish the first open-source benchmark library for stochastic model verification competitions; enable fair, reproducible, cross-team performance evaluation across seven international teams; and significantly improve comparability and reproducibility of verification results across diverse tools—thereby providing a scalable methodological foundation for future editions of the competition.
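The summary mentions formal specifications expressed in Signal Temporal Logic (STL). As a minimal, illustrative sketch (not drawn from the report's benchmarks), the quantitative robustness of a simple STL "always" property G (x > c) over a sampled trace can be computed as the worst-case margin by which the signal clears the threshold; the trace values and threshold below are hypothetical.

```python
def always_robustness(signal, threshold):
    """Robustness of the STL property G (x > c) over a sampled trace:
    the worst-case margin min_t (x(t) - c).
    Positive means the property holds with that margin; negative means it is violated."""
    return min(x - threshold for x in signal)

# Hypothetical sampled trace (e.g., a tank level in a water network), illustrative only.
trace = [2.0, 1.5, 1.25, 1.75]
print(always_robustness(trace, 1.0))  # 0.25: the signal stays above 1.0 by at least 0.25
```

Tools in this category typically evaluate such robustness measures over simulated or verified trajectories of the stochastic model, rather than over a single fixed trace as in this sketch.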

📝 Abstract
This report is concerned with a friendly competition for formal verification and policy synthesis of stochastic models. The main goal of the report is to introduce new benchmarks and their properties within this category and recommend next steps toward next year's edition of the competition. In particular, this report introduces three recently developed software tools, a new water distribution network benchmark, and a collection of simplified benchmarks intended to facilitate further comparisons among tools that were previously not directly comparable. This friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in Summer 2025.
Problem

Research questions and friction points this paper is trying to address.

Introduce new benchmarks for stochastic model verification
Compare software tools for stochastic model analysis
Propose next steps for future competition editions
Innovation

Methods, ideas, or system contributions that make the work stand out.

New benchmarks for stochastic model verification
Three recently developed software tools
Simplified benchmarks for tool comparison
Authors

Alessandro Abate
Professor of Verification and Control, University of Oxford, UK
Formal Verification, Control Theory, Stochastic Hybrid Systems, Cyber-Physical Systems, Energy and Safety-Critical Systems

Omid Akbarzadeh
PhD student, Newcastle University
Safe Cyber-Physical Systems, Formal Control, Communication Networks

Henk A.P. Blom
Delft University of Technology, Delft, The Netherlands

Sofie Haesaert
Electrical Engineering Department, TU Eindhoven

Sina Hassani
Aalborg University, Aalborg, Denmark

Abolfazl Lavaei
Assistant Professor, Newcastle University
Cyber-Physical Systems, Large-Scale Stochastic Networks, Formal Learning & Control, Safe Autonomy & AI, Data-Driven Optimization

Frederik Baymler Mathiesen
Delft University of Technology, Delft, The Netherlands

Rahul Misra
Aalborg University
Control, Optimal Control, Reinforcement Learning, Game Theory

Amy Nejati
Assistant Professor, Newcastle University
Autonomous Systems, Safe & Secure CPS, Safe Autonomy & AI, Data-Driven Control, Formal Methods

Mathis Niehage
University of Münster, Münster, Germany

Fie Ørum
Aalborg University, Aalborg, Denmark

Anne Remke
Universität Münster
Critical Infrastructures, Dependability, Model Checking

Behrad Samari
Newcastle University, Newcastle upon Tyne, UK

Ruohan Wang
Eindhoven University of Technology, Eindhoven, The Netherlands

Rafal Wisniewski
Aalborg University, Aalborg, Denmark

Ben Wooding
Newcastle University, Newcastle upon Tyne, UK

Mahdieh Zaker
Newcastle University, Newcastle upon Tyne, UK