A Scoresheet for Explainable AI

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI explainability standards are too abstract and lack operational specificity, which limits their ability to enhance transparency and support trust. Method: This paper proposes a structured, context-adaptive, multi-stakeholder Explainability Scoresheet that bridges high-level principles and engineering practice. The approach draws on the requirements of a range of stakeholders and provides guidance for applying the scoresheet in practice. Contribution/Results: The Scoresheet can be used both to specify explainability requirements and to assess the explainability provided by particular applications. Applied to a range of representative AI scenarios, including multi-agent systems, the framework demonstrates generality, operational feasibility, and practical guidance value for developers, auditors, and domain practitioners.

📝 Abstract
Explainability is important for the transparency of autonomous and intelligent systems and for helping to support the development of appropriate levels of trust. There has been considerable work on developing approaches for explaining systems and there are standards that specify requirements for transparency. However, there is a gap: the standards are too high-level and do not adequately specify requirements for explainability. This paper develops a scoresheet that can be used to specify explainability requirements or to assess the explainability aspects provided for particular applications. The scoresheet is developed by considering the requirements of a range of stakeholders and is applicable to Multiagent Systems as well as other AI technologies. We also provide guidance for how to use the scoresheet and illustrate its generality and usefulness by applying it to a range of applications.
Problem

Research questions and friction points this paper is trying to address.

Developing a scoresheet for explainability requirements
Assessing explainability in AI applications
Addressing gaps in high-level transparency standards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops explainability scoresheet
Applies to Multiagent Systems
Specifies explainability requirements
M. Winikoff
Victoria University of Wellington, Wellington, New Zealand
John Thangarajah
RMIT University, Melbourne, Australia
Sebastian Rodriguez
School of Computing Technologies, RMIT University
Artificial Intelligence · Multiagent Systems · Holonic Multiagent Systems · Agent Programming · Agent Oriented Software Engineering