Is Trust Correlated With Explainability in AI? A Meta-Analysis

📅 2025-04-16
🏛️ IEEE Transactions on Technology and Society
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study empirically examines the relationship between explainable artificial intelligence (XAI) and user trust, challenging the widely held assumption that explainability inherently enhances trust. Method: A meta-analysis of 90 independent studies was conducted, employing Hedges’ *g* for effect-size estimation, *I²* heterogeneity tests, and subgroup and moderator analyses to assess contextual influences. Contribution/Results: Results reveal a statistically significant yet modest positive correlation (*r* ≈ 0.32) between XAI and trust, with effect sizes substantially moderated by application domain, explanation type, and user expertise. This is the first large-scale quantitative demonstration that XAI’s empirical contribution to trust is limited. The study introduces a critical conceptual distinction between *trust perception*—a transient, psychological response—and *trustworthiness*, referring to sustainable, institutionally grounded AI credibility. Findings establish an evidence-based benchmark for XAI design in high-stakes domains (e.g., healthcare, criminal justice) and inform ethically grounded implementation frameworks.
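The pipeline described above (Hedges' *g* for effect sizes, *I²* for heterogeneity, random-effects pooling) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the study data are not published here, so the per-study effects and variances below are fabricated placeholders, and DerSimonian-Laird pooling is assumed as the random-effects estimator.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # Hedges' correction factor
    return j * d

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I2 heterogeneity."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I2 in percent
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    return pooled, q, i2

# Fabricated per-study Hedges' g values and sampling variances (illustration only):
effects = [0.60, 0.10, 0.55, -0.05, 0.45]
variances = [0.02, 0.04, 0.03, 0.05, 0.02]
pooled, q, i2 = pool_random_effects(effects, variances)
```

In this sketch, subgroup and moderator analyses would amount to running `pool_random_effects` separately on study subsets (e.g. by application domain or explanation type) and comparing the pooled estimates.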

📝 Abstract
This study critically examines the commonly held assumption that explainability in artificial intelligence (AI) systems inherently boosts user trust. Utilizing a meta-analytical approach, we conducted a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust. Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender among users. This indicates that while explainability contributes to building trust, it is not the sole or predominant factor in this equation. In addition to academic contributions to the field of Explainable AI (XAI), this research highlights its broader socio-technical implications, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice. By addressing challenges like algorithmic bias and ethical transparency, the study underscores the need for equitable and sustainable AI adoption. Rather than focusing solely on immediate trust, we emphasize the normative importance of fostering authentic and enduring trustworthiness in AI systems.
Problem

Research questions and friction points this paper is trying to address.

Examining if AI explainability significantly boosts user trust
Analyzing the correlation between AI explainability and trust
Exploring socio-technical impacts of explainable AI in critical domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-analysis of 90 AI explainability and trust studies
Moderate positive correlation found between explainability and trust
Emphasizes ethical transparency and enduring AI trustworthiness