Better Together? The Role of Explanations in Supporting Novices in Individual and Collective Deliberations about AI

📅 2024-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how AI novices in public institutions use eXplainable AI (XAI) to support both individual comprehension and collective deliberation about an AI system's deployment. Drawing on eight focus group interviews and twelve individual interviews, it compares how explanations function in individual understanding versus group-level deliberation. Participants worked with modular explanations spanning four information categories to solve tasks and decide about the system's deployment. The findings indicate that the explanations helped groups build shared understanding and generate arguments for and against deployment; individual participants, by contrast, engaged with the explanations in more depth and performed better on the study tasks, but missed an exchange with others. From these findings, the study distills suggestions for designing explanations that work in group settings, offering practical guidance for XAI development and deliberative AI governance in the public sector.

📝 Abstract
Deploying AI systems in public institutions can have far-reaching consequences for many people, making it a matter of public interest. Providing opportunities for stakeholders to come together, understand these systems, and debate their merits and harms is thus essential. Explainable AI often focuses on individuals, but deliberation benefits from group settings, which are underexplored. To address this gap, we present findings from an interview study with 8 focus groups and 12 individuals. Our findings provide insight into how explanations support AI novices in deliberating alone and in groups. Participants used modular explanations with four information categories to solve tasks and decide about an AI system's deployment. We found that the explanations supported groups in creating shared understanding and in finding arguments for and against the system's deployment. In comparison, individual participants engaged with explanations in more depth and performed better in the study tasks, but missed an exchange with others. Based on our findings, we provide suggestions on how explanations should be designed to work in group settings and describe their potential use in real-world contexts. With this, our contributions inform XAI research that aims to enable AI novices to understand and deliberate AI systems in the public sector.
Problem

Research questions and friction points this paper is trying to address.

Role of explanations in AI novices' individual and group deliberations
Effectiveness of modular explanations for shared understanding in groups
Designing explanations for debates about AI system deployment in the public sector
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular explanations with four information categories
Evidence that explanations help groups create shared understanding
Design suggestions for explanations that work in group settings
Timothée Schmude
University of Vienna, Faculty of Computer Science, Research Network Data Science, Doctoral School Computer Science, Austria
Laura Koesten
University of Vienna
Human Data Interaction (data discovery, data reuse, data visualization, sensemaking)
Torsten Möller
University of Vienna, Faculty of Computer Science, Research Group Visualization and Data Analysis, Research Network Data Science, Austria
Sebastian Tschiatschek
University of Vienna
Machine Learning, Reinforcement Learning, Interactive Machine Learning, Probabilistic Models, Explainable AI