🤖 AI Summary
This study addresses a critical gap in explainable AI (XAI) research: the lack of a systematic link between the content of explanations and the concrete actions those explanations prompt users to take, a gap that makes XAI's real-world utility hard to evaluate. Through in-depth interviews with 14 end users in the education and medicine domains, the authors use thematic analysis to construct a catalog that maps 12 categories of explanatory information to 60 corresponding user actions. The catalog not only makes the pathways from explanation content to user behavior explicit but also gives developers a concrete tool for designing, articulating, and validating the practical utility of their explanation mechanisms, thereby expanding the action-oriented design space for XAI systems.
📝 Abstract
A core assumption of Explainable AI (XAI) is that explanations are useful to users; that is, users will do something with them. Prior work, however, does not clearly connect the information provided in explanations to the actions users take, which makes effectiveness difficult to evaluate. In this paper, we articulate this connection. We conducted a formative study through 14 interviews with end users in education and medicine. We contribute a catalog of information and associated actions: it maps 12 categories of information that participants described relying on to 60 different actions they take. We show how AI Creators can use the catalog's specificity and breadth to articulate how they expect the information in their explanations to lead to user actions, and to test those assumptions. We illustrate this approach with an exemplar XAI system. We conclude by discussing how our catalog expands the design space for XAI systems to support actionability.