🤖 AI Summary
This study investigates how AI novices in public institutions use eXplainable AI (XAI) to support individual comprehension and collective deliberation, thereby advancing democratic AI deployment. Drawing on eight focus group interviews and twelve individual interviews, it offers a systematic comparison of how XAI supports individual understanding versus group-level deliberation. The study proposes a modular explanation framework with four information categories designed for collective deliberation and distills empirically grounded design principles for group interaction. The findings indicate that structured explanations help groups build shared situational awareness and generate balanced arguments for and against a system's deployment; individuals, by contrast, engage with explanations in greater analytical depth but lack the constructive exchange of viewpoints that groups provide. These results offer theoretical foundations and practical design guidance for XAI development and deliberative AI governance in the public sector.
📝 Abstract
Deploying AI systems in public institutions can have far-reaching consequences for many people, making it a matter of public interest. Providing opportunities for stakeholders to come together, understand these systems, and debate their merits and harms is thus essential. Explainable AI often focuses on individuals, but deliberation benefits from group settings, which are underexplored. To address this gap, we present findings from an interview study with 8 focus groups and 12 individuals. Our findings provide insight into how explanations support AI novices in deliberating alone and in groups. Participants used modular explanations with four information categories to solve tasks and decide on an AI system's deployment. We found that the explanations supported groups in creating shared understanding and in finding arguments for and against the system's deployment. In comparison, individual participants engaged with explanations in more depth and performed better in the study tasks, but missed an exchange with others. Based on our findings, we provide suggestions on how explanations should be designed to work in group settings and describe their potential use in real-world contexts. With this, our contributions inform XAI research that aims to enable AI novices to understand and deliberate on AI systems in the public sector.