🤖 AI Summary
Normative requirements—encompassing social, legal, ethical, empathetic, and cultural (SLEEC) dimensions—are notoriously difficult to comprehend, debug, and verify in multi-stakeholder collaborative settings due to their inherent ambiguity and non-technical nature. Method: This paper introduces SLEEC-LLM, a framework that leverages large language models (LLMs) to generate natural-language explanations of the counterexamples that reveal SLEEC requirement inconsistencies, thereby bridging the gap between formal verification outputs and non-technical stakeholders. It integrates a domain-specific language (DSL), model checking, and LLM-based explanation generation to produce human-readable, semantically faithful interpretations. Results: In two real-world case studies involving non-technical stakeholders, SLEEC-LLM improved the efficiency and explainability of requirements elicitation and consistency analysis, reducing the effort needed to understand and resolve rule conflicts during requirement iteration.
📝 Abstract
Normative requirements specify social, legal, ethical, empathetic, and cultural (SLEEC) norms that must be observed by a system. To support the identification of SLEEC requirements, numerous standards and regulations have been developed. These requirements are typically defined by non-technical stakeholders with diverse expertise (e.g., ethicists, lawyers, social scientists). Hence, ensuring their consistency and managing the requirement elicitation process are complex and error-prone tasks. Recent research has addressed this challenge using domain-specific languages to specify normative requirements as rules, whose consistency can then be analysed with formal methods. Nevertheless, these approaches often present the results from formal verification tools in a way that is inaccessible to non-technical users. This hinders understanding and makes the iterative process of eliciting and validating these requirements inefficient in terms of both time and effort. To address this problem, we introduce SLEEC-LLM, a tool that uses large language models (LLMs) to provide natural-language interpretations for model-checking counterexamples corresponding to SLEEC rule inconsistencies. SLEEC-LLM improves the efficiency and explainability of normative requirements elicitation and consistency analysis. To demonstrate its effectiveness, we summarise its use in two real-world case studies involving non-technical stakeholders.