AI Summary
Current AI agents lack verifiable mechanisms for aligning with social, legal, ethical, empathetic, and cultural (SLEEC) norms in high-stakes domains. This work proposes the first systematic framework that operationalises abstract SLEEC principles as engineerable specifications, integrating norm modelling, requirements engineering, formal verification, and interdisciplinary evaluation to enable traceable, verifiable implementation of these norms. By bridging the gap between high-level value principles and technical realisation, the framework offers both theoretical foundations and practical guidance for developing trustworthy AI systems that demonstrably align with human values.
Abstract
As AI agents are increasingly used in high-stakes domains such as healthcare and law enforcement, aligning their behaviour with social, legal, ethical, empathetic, and cultural (SLEEC) norms has become a critical engineering challenge. While international frameworks have established high-level normative principles for AI, a significant gap remains in translating these abstract principles into concrete, verifiable requirements. To address this gap, we propose a systematic SLEEC-norm operationalisation process for determining, validating, implementing, and verifying normative requirements. Furthermore, we survey the landscape of methods and tools supporting this process, and identify the key remaining challenges and research avenues for addressing them. We thus establish a framework, and define a research and policy agenda, for developing AI agents that are not only functionally useful but also demonstrably aligned with human norms and values.