A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In medical human-AI collaborative diagnosis, establishing clinician trust in AI predictions through explainable artificial intelligence (XAI) remains a critical barrier to clinical deployment. This study pioneers the explicit modeling of argumentative structures in XAI explanation generation and conducts a mixed-methods user study with practicing clinicians—including standardized questionnaire assessment and in-depth interviews—to systematically evaluate argumentative explanations (e.g., causal chains, counterfactual contrasts) across understandability, credibility, and clinical utility. Results identify “contestability” and “clinical consistency” as core dimensions clinicians use to assess explanations, reveal strong preferences for specific explanation types, and formulate argument-quality-driven design principles for medical XAI. The findings provide empirical evidence and actionable guidelines for developing high-trust, human-centered AI-assisted diagnostic systems.

📝 Abstract
As the field of healthcare increasingly adopts artificial intelligence, it becomes important to understand which types of explanations increase transparency and empower users to develop confidence and trust in the predictions made by machine learning (ML) systems. In shared decision-making scenarios where doctors cooperate with ML systems to reach an appropriate decision, establishing mutual trust is crucial. In this paper, we explore different approaches to generating explanations in eXplainable AI (XAI) and make their underlying arguments explicit so that they can be evaluated by medical experts. In particular, we present the findings of a user study conducted with physicians to investigate their perceptions of various types of AI-generated explanations in the context of diagnostic decision support. The study aims to identify the most effective and useful explanations that enhance the diagnostic process. In the study, medical doctors filled out a survey to assess different types of explanations. Further, an interview was carried out post-survey to gain qualitative insights on the requirements of explanations incorporated in diagnostic decision support. Overall, the insights gained from this study contribute to understanding the types of explanations that are most effective.
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI explanations for diagnostic decision support
Identifying effective explanations to enhance doctor-AI trust
Assessing physician perceptions of XAI in healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating argumentative explanations in diagnostic AI
User study with physicians on AI explanations
Survey and interview to assess explanation effectiveness
Felix Liedeker
Semantic Computing Group, CITEC, Bielefeld University, Bielefeld, Germany
Olivia Sanchez-Graillet
Semantic Computing Group, CITEC, Bielefeld University, Bielefeld, Germany
Moana Seidler
Ruhr-Epileptology, University Hospital Knappschaftskrankenhaus Bochum, Ruhr-University, Bochum, Germany
Christian Brandt
Bethel Epilepsy Centre, Mara Hospital, University Hospital for Epileptology, Bielefeld, Germany
Epilepsy, comorbidity, antiepileptic drugs, intellectual disability, pharmacokinetics
Jörg Wellmer
Ruhr-Epileptology, University Hospital Knappschaftskrankenhaus Bochum, Ruhr-University, Bochum, Germany
Philipp Cimiano
Professor for Computer Science, Bielefeld University
Semantic Web, Text Mining, Natural Language Processing