Towards Continuous Assurance with Formal Verification and Assurance Cases

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ensuring trustworthiness across the full lifecycle of autonomous systems (design, operation, and evolution) faces two key challenges: fragmentation between design-time and run-time assurance, and insufficient adaptability to dynamic environmental and operational changes. This paper proposes a unified continuous assurance framework that integrates formal verification (via RoboChart), probabilistic risk analysis (using PRISM), and assurance case modeling in a model-driven approach, enabling co-modeling and dynamic updating of assurance artifacts. The framework supports automated traceability, reconstruction, and regeneration of assurance arguments, establishing an end-to-end trust chain across the design, operation, and evolution phases. An Eclipse plugin implements the automated model transformation and argument generation. Evaluated on a nuclear inspection robot case study, the framework enhances system trustworthiness, supports regulatory compliance, and aligns with the Trilateral AI Principles: accountability, transparency, and robustness.

📝 Abstract
Autonomous systems must sustain justified confidence in their correctness and safety across their operational lifecycle, from design and deployment through post-deployment evolution. Traditional assurance methods often separate development-time assurance from runtime assurance, yielding fragmented arguments that cannot adapt to runtime changes or system updates, a significant challenge for assured autonomy. Towards addressing this, we propose a unified Continuous Assurance Framework that integrates design-time, runtime, and evolution-time assurance within a traceable, model-driven workflow as a step towards assured autonomy. In this paper, we specifically instantiate the design-time phase of the framework using two formal verification methods: RoboChart for functional correctness and PRISM for probabilistic risk analysis. We also propose a model-driven transformation pipeline, implemented as an Eclipse plugin, that automatically regenerates structured assurance arguments whenever formal specifications or their verification results change, thereby ensuring traceability. We demonstrate our approach on a nuclear inspection robot scenario, and discuss its alignment with the Trilateral AI Principles, reflecting regulator-endorsed best practices.
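To give a flavour of the probabilistic risk analysis the abstract attributes to PRISM, the following is an illustrative toy model, invented for this summary and not taken from the paper: a one-step inspection task for the robot, modelled as a discrete-time Markov chain with an assumed 5% chance of failure.

```
// Toy DTMC sketch (hypothetical numbers, not from the paper)
dtmc

module inspection_robot
  // 0 = inspecting, 1 = inspection complete, 2 = failed
  s : [0..2] init 0;

  // one inspection attempt: succeed with prob. 0.95, fail with 0.05
  [] s=0 -> 0.95 : (s'=1) + 0.05 : (s'=2);
endmodule
```

A PRISM property such as `P=? [ F s=2 ]` then queries the probability that the robot eventually reaches the failed state, the kind of quantitative evidence the framework would feed into its assurance arguments.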
Problem

Research questions and friction points this paper is trying to address.

Integrating design-time, runtime, and evolution-time assurance for autonomous systems
Automatically regenerating assurance arguments when formal specifications change
Ensuring traceability across operational lifecycle through model-driven workflow
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates formal verification methods (RoboChart for functional correctness, PRISM for probabilistic risk analysis)
Automatically regenerates assurance arguments via transformation pipeline
Unifies design-time, runtime, and evolution-time assurance in a single framework
Dhaminda B. Abeywickrama
Department of Computer Science, The University of Manchester, Manchester, UK
Michael Fisher
Professor of Computer Science, University of Manchester
Autonomous Systems · Trustworthiness · Formal Methods · Temporal Logic · Model Checking
Frederic Wheeler
Regulatory Support Directorate, Amentum, Warrington, UK
Louise A. Dennis
Department of Computer Science, The University of Manchester, Manchester, UK