"Even explanations will not help in trusting [this] fundamentally biased system": A Predictive Policing Case-Study

📅 2025-04-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether multimodal explanations—textual, visual, or hybrid—can foster “appropriate trust” (i.e., trust calibrated to a high-risk AI predictive policing system’s actual capabilities) and examines how user expertise (retired police officers vs. lay public) moderates this effect. Method: A human-AI interaction experiment was conducted, integrating validated trust scales, behavioral decision analysis, and cross-group comparisons. Contribution/Results: All explanation modalities failed to correct trust miscalibration induced by the system’s inherent biases. Although hybrid explanations increased experts’ subjective trust, they did not improve objective decision calibration. This work provides the first empirical evidence that explainability alone cannot compensate for foundational model bias—directly challenging the prevailing “explanation-as-trust-enhancer” assumption. It argues that appropriate trust in high-stakes AI must be engineered *a priori* through responsible system design, rather than retrofitted via post-hoc explanations.

📝 Abstract
In today's society, where Artificial Intelligence (AI) has gained a vital role, concerns regarding users' trust have garnered significant attention. The use of AI systems in high-risk domains has often led users to either under-trust them, potentially causing inadequate reliance, or over-trust them, resulting in over-compliance. Therefore, users must maintain an appropriate level of trust. Past research has indicated that explanations provided by AI systems can enhance users' understanding of when to trust or not trust the system. However, the utility of presenting explanations in different forms remains to be explored, especially in high-risk domains. Therefore, this study explores the impact of different explanation types (text, visual, and hybrid) and user expertise (retired police officers and lay users) on establishing appropriate trust in AI-based predictive policing. While we observed that the hybrid form of explanations increased expert users' subjective trust in the AI, it did not lead to better decision-making. Furthermore, no form of explanation helped build appropriate trust. The findings of our study emphasize the importance of re-evaluating the use of explanations to build [appropriate] trust in AI-based systems, especially when the system's use is questionable. Finally, we synthesize potential challenges and policy recommendations based on our results to design for appropriate trust in high-risk AI-based systems.
Problem

Research questions and friction points this paper is trying to address.

Explores impact of explanation types on AI trust in predictive policing
Examines if explanations improve trust and decision-making for experts and lay users
Highlights challenges in building appropriate trust in biased high-risk AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid explanations boost experts' subjective trust but not decision quality
No explanation type establishes appropriate trust
Calls to re-evaluate the use of explanations in high-risk AI
Siddharth Mehrotra
Postdoctoral researcher @ University of Amsterdam & TU Delft
AI · Trust · Human Computer Interaction
Ujwal Gadiraju
Associate Professor, Delft University of Technology
Human-centered AI · Human-AI Interaction · Crowd Computing · Human Computation · Information Retrieval
Eva Bittner
University of Hamburg
Folkert van Delden
Delft University of Technology
C. Jonker
Delft University of Technology & LIACS, Leiden University
Myrthe L. Tielman
Delft University of Technology