Evaluating Actionability in Explainable AI

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical gap in explainable artificial intelligence (XAI) research: the lack of a systematic link between the content of explanations and the concrete actions they prompt users to take, which hinders evaluation of XAI's real-world utility. Through in-depth interviews with 14 end users in the education and healthcare domains, the authors apply thematic analysis and categorization to construct the first catalog mapping 12 types of explanatory information to 60 corresponding user actions. This catalog not only elucidates actionable pathways from XAI explanations to user behavior but also gives developers a concrete toolset for designing, articulating, and validating the practical utility of explanation mechanisms, thereby expanding the action-oriented design space for XAI systems.

📝 Abstract
A core assumption of Explainable AI (XAI) is that explanations are useful to users -- that is, users will do something with the explanations. Prior work, however, does not clearly connect the information provided in explanations to user actions to evaluate effectiveness. In this paper, we articulate this connection. We conducted a formative study through 14 interviews with end users in education and medicine. We contribute a catalog of information and associated actions. Our catalog maps 12 categories of information that participants described relying on to take 60 different actions. We show how AI Creators can use the catalog's specificity and breadth to articulate how they expect information in their explanations to lead to user actions and test their assumptions. We use an exemplar XAI system to illustrate this approach. We conclude by discussing how our catalog expands the design space for XAI systems to support actionability.
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
actionability
user actions
explanations
XAI evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Actionability
User Actions
Explanation Design
Formative Study
Gennie Mansi
Georgia Institute of Technology
Computer Science, Human-Centered Explainable AI
Julia Kim
Georgia Institute of Technology, Atlanta GA 30332, USA
Mark O. Riedl
Georgia Institute of Technology, Atlanta GA 30332, USA