🤖 AI Summary
Existing AI explainability and transparency standards are high-level and lack operational specificity, which makes it difficult to improve transparency and establish appropriate levels of trust. Method: The paper proposes a structured, context-adaptive, multi-stakeholder Explainability Scoresheet that bridges high-level principles and engineering practice. The approach combines elicitation of stakeholder requirements, application of the scoresheet across domains, and guidance for using it within engineering workflows. Contribution/Results: The Scoresheet supports quantifiable decomposition and context-dependent adaptation of explainability requirements across the AI system lifecycle, from specification to evaluation. Applied to a range of representative AI applications, including multi-agent systems, it demonstrates generalizability, operational feasibility, and practical guidance value for developers, auditors, and domain practitioners.
📝 Abstract
Explainability is important for the transparency of autonomous and intelligent systems and for supporting the development of appropriate levels of trust. There has been considerable work on developing approaches for explaining systems, and there are standards that specify requirements for transparency. However, there is a gap: the standards are too high-level and do not adequately specify requirements for explainability. This paper develops a scoresheet that can be used to specify explainability requirements or to assess the explainability aspects provided for particular applications. The scoresheet is developed by considering the requirements of a range of stakeholders and is applicable to Multiagent Systems as well as other AI technologies. We also provide guidance for how to use the scoresheet and illustrate its generality and usefulness by applying it to a range of applications.
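To make the idea of a stakeholder-oriented scoresheet concrete, the minimal sketch below shows one possible way to represent it in code. This is an illustration only: the stakeholder groups, criteria, level scale, and the `gaps` helper are assumptions for the sketch, not the categories or structure defined in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration of an explainability scoresheet:
# each entry records, for one stakeholder and one explainability aspect,
# the level that is required and the level an application provides.

@dataclass
class ScoresheetEntry:
    stakeholder: str          # e.g. "end user", "auditor", "developer" (assumed groups)
    criterion: str            # explainability aspect being specified or assessed
    required_level: int       # required level on an assumed 0-3 scale
    provided_level: int = 0   # level the application actually provides
    notes: str = ""

@dataclass
class ExplainabilityScoresheet:
    application: str
    entries: List[ScoresheetEntry] = field(default_factory=list)

    def add(self, entry: ScoresheetEntry) -> None:
        self.entries.append(entry)

    def gaps(self) -> Dict[str, int]:
        """Criteria where the provided level falls short of the requirement."""
        return {
            f"{e.stakeholder}/{e.criterion}": e.required_level - e.provided_level
            for e in self.entries
            if e.provided_level < e.required_level
        }

# Example use: assessing a hypothetical decision-support application.
sheet = ExplainabilityScoresheet(application="loan approval assistant")
sheet.add(ScoresheetEntry("end user", "reasons for individual decision",
                          required_level=3, provided_level=1))
sheet.add(ScoresheetEntry("auditor", "trace of decision process",
                          required_level=2, provided_level=2))
print(sheet.gaps())  # {'end user/reasons for individual decision': 2}
```

The same structure can be read in two directions, matching the two uses named in the abstract: filling in `required_level` specifies explainability requirements, while comparing `provided_level` against it assesses what a particular application delivers.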