🤖 AI Summary
This work addresses the challenge of integrating human values, which are inherently ambiguous, pluralistic, and context-dependent, into requirements engineering for ethically aware autonomous systems. The authors propose a goal-oriented requirements engineering approach that operationalizes human values into five actionable categories: Social, Legal, Ethical, Empathetic, and Cultural (SLEEC), aligning them with functional and adaptation goals. Through normative modeling, conflict-detection algorithms, and a design-time negotiation mechanism, the method enables structured integration of value-based requirements, automated validation of their well-formedness, and early identification of conflicts. The feasibility of the approach is demonstrated in a case study on medical Body Sensor Networks, where it supports the system's ethical alignment during early design.
📝 Abstract
Operationalizing human values alongside functional and adaptation requirements remains challenging due to their ambiguous, pluralistic, and context-dependent nature. Explicit representations are needed to support the elicitation, analysis, and negotiation of value conflicts beyond traditional software engineering abstractions. In this work, we propose a requirements engineering approach for ethics-aware autonomous systems that captures human values as normative goals and aligns them with functional and adaptation goals. These goals are systematically operationalized into Social, Legal, Ethical, Empathetic, and Cultural (SLEEC) requirements, enabling automated well-formedness checking, conflict detection, and early design-time negotiation. We demonstrate the feasibility of the approach through a medical Body Sensor Network case study.
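To make the conflict-detection idea concrete, here is a minimal, hypothetical sketch (not the authors' actual formalism): SLEEC-style requirements are reduced to "when event, then action is obliged/forbidden" triples, and two rules conflict when the same event triggers contradictory obligations on the same action. The rule names and events below are illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A simplified SLEEC-style rule: when `event` occurs, `action` is
    obliged (positive=True) or forbidden (positive=False)."""
    name: str
    event: str
    action: str
    positive: bool

def detect_conflicts(rules):
    """Return pairs of rule names that react to the same event yet impose
    contradictory obligations on the same action."""
    conflicts = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            if (r1.event == r2.event and r1.action == r2.action
                    and r1.positive != r2.positive):
                conflicts.append((r1.name, r2.name))
    return conflicts

# Toy rules loosely inspired by a body-sensor-network setting.
rules = [
    Rule("R1_privacy", "VitalSignsRead", "ShareData", positive=False),
    Rule("R2_safety",  "VitalSignsRead", "ShareData", positive=True),
    Rule("R3_consent", "SessionStart",   "AskConsent", positive=True),
]

print(detect_conflicts(rules))  # [('R1_privacy', 'R2_safety')]
```

In the paper's approach, conflicts like the privacy-vs-safety pair above would be surfaced at design time and resolved through the negotiation mechanism rather than discovered after deployment.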