🤖 AI Summary
Ensuring trustworthiness across the full lifecycle of autonomous systems (design, operation, and evolution) faces two key challenges: the fragmentation between design-time and run-time assurance, and insufficient adaptability to dynamic environmental and operational changes. This paper proposes a unified continuous assurance framework that integrates formal verification (via RoboChart), probabilistic risk analysis (using PRISM), and assurance case modeling through a model-driven approach, enabling co-modeling and dynamic updating of assurance artifacts. The framework supports automated traceability, reconstruction, and regeneration of assurance arguments, thereby establishing an end-to-end trust chain spanning the design, operation, and evolution phases. An Eclipse plugin implements the automated model transformation and argument generation. Evaluated on a nuclear inspection robot case study, the framework is shown to enhance system trustworthiness and regulatory compliance, and to align with the Trilateral AI Principles of accountability, transparency, and robustness.
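As a rough illustration of the probabilistic risk analysis step, the sketch below drives PRISM on a toy failure model of an inspection robot and compares the computed failure probability against a tolerable-risk threshold. This is a minimal, hypothetical example, not the paper's artifacts: the model, property, threshold, and script are assumptions, and it presumes a PRISM installation with the `prism` executable on PATH.

```python
# Minimal sketch (not the authors' tooling): run a PRISM risk query for an
# inspection robot and check the result against a tolerable-risk threshold.
# The model, property, and threshold are illustrative placeholders.
import re
import subprocess
import tempfile
from pathlib import Path

ROBOT_MODEL = """\
dtmc
module inspection_robot
  s : [0..2] init 0;  // 0 = inspecting, 1 = mission complete, 2 = failed
  [] s=0 -> 0.95:(s'=1) + 0.05:(s'=2);
  [] s=1 -> 1:(s'=1);
  [] s=2 -> 1:(s'=2);
endmodule
"""

FAILURE_PROPERTY = "P=? [ F s=2 ]"   # probability the robot ever enters the failed state
RISK_THRESHOLD = 0.10                # illustrative tolerable-risk bound

def verify_risk() -> float:
    """Run PRISM on the toy model/property and return the computed probability."""
    with tempfile.TemporaryDirectory() as tmp:
        model = Path(tmp, "robot.pm")
        props = Path(tmp, "robot.pctl")
        model.write_text(ROBOT_MODEL)
        props.write_text(FAILURE_PROPERTY + "\n")
        out = subprocess.run(
            ["prism", str(model), str(props)],
            capture_output=True, text=True, check=True,
        ).stdout
        # PRISM reports computed values on a "Result: ..." line.
        match = re.search(r"Result:\s*([0-9.eE+-]+)", out)
        if match is None:
            raise RuntimeError("could not parse PRISM output")
        return float(match.group(1))

if __name__ == "__main__":
    p_fail = verify_risk()
    verdict = "within" if p_fail <= RISK_THRESHOLD else "exceeds"
    print(f"P(mission failure) = {p_fail:.4f} ({verdict} threshold {RISK_THRESHOLD})")
```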
📝 Abstract
Autonomous systems must sustain justified confidence in their correctness and safety across their operational lifecycle, from design and deployment through post-deployment evolution. Traditional assurance methods often separate development-time assurance from runtime assurance, yielding fragmented arguments that cannot adapt to runtime changes or system updates, a significant challenge for assured autonomy. To address this, we propose a unified Continuous Assurance Framework that integrates design-time, runtime, and evolution-time assurance within a traceable, model-driven workflow as a step towards assured autonomy. In this paper, we specifically instantiate the design-time phase of the framework using two formal verification methods: RoboChart for functional correctness and PRISM for probabilistic risk analysis. We also propose a model-driven transformation pipeline, implemented as an Eclipse plugin, that automatically regenerates structured assurance arguments whenever formal specifications or their verification results change, thereby ensuring traceability. We demonstrate our approach on a nuclear inspection robot scenario and discuss its alignment with the Trilateral AI Principles, reflecting regulator-endorsed best practices.
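To make the regeneration idea concrete, the following sketch (in Python, not the paper's Eclipse plugin) rebuilds a small GSN-style argument fragment whenever the underlying verification evidence files change, with content hashes serving as traceability links. All file names, the GSN-style structure, and the staleness check are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the regeneration idea (not the paper's Eclipse plugin):
# when a verification artifact changes, rebuild a small GSN-style argument
# fragment whose evidence nodes trace back to the artifacts by content hash.
# File names and the argument structure are illustrative assumptions.
import hashlib
import json
from pathlib import Path

ARTIFACTS = {
    "functional_correctness": Path("robochart_verification.log"),  # hypothetical RoboChart output
    "probabilistic_risk": Path("prism_results.txt"),               # hypothetical PRISM output
}

def fingerprint(path: Path) -> str:
    """Content hash used as a traceability link from argument to evidence."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:12]

def build_argument() -> dict:
    """Assemble a GSN-style fragment: one top goal, one solution per evidence item."""
    return {
        "goal": "G1: The inspection robot is acceptably safe to operate",
        "strategy": "S1: Argue over functional correctness and probabilistic risk",
        "solutions": [
            {"id": f"Sn_{name}", "evidence": str(path), "trace": fingerprint(path)}
            for name, path in ARTIFACTS.items() if path.exists()
        ],
    }

def regenerate_if_stale(argument_file: Path = Path("assurance_argument.json")) -> bool:
    """Regenerate the argument only when the underlying evidence has changed."""
    fresh = build_argument()
    if argument_file.exists() and json.loads(argument_file.read_text()) == fresh:
        return False  # evidence unchanged, existing argument still matches it
    argument_file.write_text(json.dumps(fresh, indent=2))
    return True
```

Keying the staleness check on evidence content rather than timestamps is one simple way to keep arguments and verification results in lockstep; the paper's pipeline operates on richer model artifacts, but the traceability intent is the same.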