The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR

📅 2025-01-09
🤖 AI Summary
This study examines the practical legal requirements for explainable artificial intelligence (XAI) in credit decision-making, as articulated by legal experts, covering GDPR compliance, interpretability, and operational utility, and identifies tensions among these objectives. Method: drawing on an online survey and in-depth interviews with European legal practitioners, the research uses grounded-theory qualitative coding and empirical legal analysis to characterize expert expectations systematically. Contribution/Results: it uncovers a hierarchical structure of explanation expectations, including individual contestability, transparency thresholds, and explanation granularity, previously undocumented in the XAI-GDPR literature. Findings reveal widespread deficiencies in current XAI explanations, particularly insufficient comprehensibility and the omission of legally salient information. The study proposes a GDPR-aligned framework for presenting explanations, principles for selecting explanation content, and a pathway to ensure contestability. It delivers 12 actionable technical recommendations for developers and identifies six core legal themes, providing interdisciplinary, evidence-based guidance for deploying XAI in regulated financial contexts.

📝 Abstract
Explainable AI (XAI) provides methods to understand non-interpretable machine learning models. However, we have little knowledge about what legal experts expect from these explanations, including their legal compliance with, and value against, European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI, with a specific focus on the European General Data Protection Regulation. The study consists of an online questionnaire and follow-up interviews, and is centered around a use case in the credit domain. We extract a set of hierarchical and interconnected codes using grounded theory, and present the standpoints of the participating experts towards XAI. We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and the data subject. Finally, we present a set of recommendations for developers of XAI methods, and indications of legal areas of discussion. Among others, the recommendations address the presentation, choice, and content of an explanation, technical risks, and the end-user, while we provide legal pointers to the contestability of explanations, transparency thresholds, intellectual property rights, and the relationship between the involved parties.
Problem
Research questions and friction points this paper addresses: GDPR, XAI, Legal Framework.

Innovation
Methods, ideas, or system contributions that make the work stand out: XAI Legal Framework, GDPR Compliance, Expert Insights in Law and AI.
Authors
Laura State, Alexander von Humboldt Institute for Internet and Society (topics: auditing, explainable AI, artificial intelligence)
A. Colmenarejo, Southampton Law School, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom; ISTI-CNR, Via G. Moruzzi, 1, Pisa, 56124, Italy
Andrea Beretta, CNR - ISTI (topics: Human Computer Interaction, HCI, Decision Making, Human-Centered AI)
Salvatore Ruggieri, Università di Pisa (Computer Science)
Franco Turini, Professor of Computer Science, University of Pisa, Italy (topics: data mining, artificial intelligence, database, privacy)
Stephanie Law, Southampton Law School, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom