Tool for Supporting Debugging and Understanding of Normative Requirements Using LLMs

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Normative requirements, which encompass social, legal, ethical, empathetic, and cultural (SLEEC) dimensions, are difficult to comprehend, debug, and verify in multi-stakeholder collaborative settings due to their inherent ambiguity and non-technical provenance. Method: This paper introduces SLEEC-LLM, a tool that uses large language models (LLMs) to generate natural-language explanations of model-checking counterexamples revealing SLEEC requirement inconsistencies, thereby bridging the gap between formal verification outputs and non-technical stakeholders. It integrates a domain-specific language (DSL), model checking, and LLM-based explanation generation to produce human-readable, semantically faithful interpretations. Results: In two real-world case studies involving non-technical stakeholders, SLEEC-LLM improved the efficiency and explainability of normative requirements elicitation and consistency analysis, reducing the effort needed to understand and resolve rule conflicts during requirement iteration.

📝 Abstract
Normative requirements specify social, legal, ethical, empathetic, and cultural (SLEEC) norms that must be observed by a system. To support the identification of SLEEC requirements, numerous standards and regulations have been developed. These requirements are typically defined by stakeholders in the non-technical system with diverse expertise (e.g., ethicists, lawyers, social scientists). Hence, ensuring their consistency and managing the requirement elicitation process are complex and error-prone tasks. Recent research has addressed this challenge using domain-specific languages to specify normative requirements as rules, whose consistency can then be analyzed with formal methods. Nevertheless, these approaches often present the results from formal verification tools in a way that is inaccessible to non-technical users. This hinders understanding and makes the iterative process of eliciting and validating these requirements inefficient in terms of both time and effort. To address this problem, we introduce SLEEC-LLM, a tool that uses large language models (LLMs) to provide natural-language interpretations for model-checking counterexamples corresponding to SLEEC rule inconsistencies. SLEEC-LLM improves the efficiency and explainability of normative requirements elicitation and consistency analysis. To demonstrate its effectiveness, we summarise its use in two real-world case studies involving non-technical stakeholders.
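The pipeline the abstract describes (formal consistency analysis of SLEEC rules, then LLM-generated explanations of any counterexample) can be sketched as follows. This is an illustrative sketch only: the `Counterexample` structure, rule syntax, and function names are assumptions for exposition, not the actual interfaces of the SLEEC-LLM tool.

```python
# Hypothetical sketch of a SLEEC-LLM-style pipeline: a model-checking
# counterexample for conflicting SLEEC rules is rendered as a prompt
# asking an LLM for a plain-English explanation. All names and the
# counterexample format are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Counterexample:
    rules: list[str]   # the SLEEC rules found to be inconsistent
    trace: list[str]   # sequence of events leading to the conflict


def build_explanation_prompt(cx: Counterexample) -> str:
    """Render a model-checking counterexample as an LLM prompt aimed
    at producing an explanation for non-technical stakeholders."""
    rules = "\n".join(f"  - {r}" for r in cx.rules)
    steps = "\n".join(f"  {i + 1}. {e}" for i, e in enumerate(cx.trace))
    return (
        "The following SLEEC rules are inconsistent:\n"
        f"{rules}\n"
        "A model checker produced this event trace demonstrating the conflict:\n"
        f"{steps}\n"
        "Explain in plain English, for a non-technical stakeholder "
        "(e.g. an ethicist or lawyer), why these rules conflict here."
    )


cx = Counterexample(
    rules=[
        "R1: when UserDistressed then ComfortUser",
        "R2: when BatteryLow then not ComfortUser",
    ],
    trace=[
        "BatteryLow occurs",
        "UserDistressed occurs",
        "ComfortUser is both required (R1) and forbidden (R2)",
    ],
)
prompt = build_explanation_prompt(cx)
```

In the elicitation loop the abstract describes, the LLM's answer to such a prompt would replace the raw counterexample trace as the artifact shown to stakeholders, who then revise the offending rules and re-run the consistency check.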
Problem

Research questions and friction points this paper is trying to address.

Supporting efficient elicitation and identification of SLEEC normative requirements
Making formal verification results understandable to non-technical users
Improving the consistency analysis of normative requirements using LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs to generate natural-language interpretations of model-checking counterexamples
Analyzes SLEEC rule inconsistencies with formal methods
Improves explainability for non-technical stakeholders
Alex Kleijwegt
Department of Computer Science, University of York, United Kingdom
Sinem Getir Yaman
Department of Computer Science, University of York, United Kingdom
Radu Calinescu
Professor of Computer Science, University of York
Formal methods · AI and autonomous systems · Self-adaptive systems