Revealing AI Reasoning Increases Trust but Crowds Out Unique Human Knowledge

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how AI reasoning transparency affects human trust in AI recommendations and the utilization of unique human knowledge (UHK)—contextual, experiential, and non-automatable cognitive resources. Method: A pre-registered, incentive-compatible behavioral experiment manipulated transparency levels (no reasoning, brief reasoning, detailed reasoning) to quantify trade-offs between trust in AI advice and UHK deployment during decision-making. Results: Transparency—regardless of reasoning depth—significantly increased trust and AI adoption but concurrently suppressed UHK usage, systematically marginalizing human contextual knowledge. These findings challenge the prevailing assumption that transparency inherently fosters calibrated, rational trust. Instead, they provide the first empirical evidence that transparency can induce intuitive overtrust, thereby crowding out irreplaceable human cognitive contributions. The study thus offers critical theoretical insights and practical implications for designing trust-calibration mechanisms and knowledge-complementary frameworks in human-AI collaboration.

📝 Abstract
Effective human-AI collaboration requires humans to accurately gauge AI capabilities and calibrate their trust accordingly. Humans often have context-dependent private information, referred to as Unique Human Knowledge (UHK), that is crucial for deciding whether to accept or override AI's recommendations. We examine how displaying AI reasoning affects trust and UHK utilization through a pre-registered, incentive-compatible experiment (N = 752). We find that revealing AI reasoning, whether brief or extensive, acts as a powerful persuasive heuristic that significantly increases trust and agreement with AI recommendations. Rather than helping participants appropriately calibrate their trust, this transparency induces over-trust that crowds out UHK utilization. Our results highlight the need for careful consideration when revealing AI reasoning and call for better information design in human-AI collaboration systems.
Problem

Research questions and friction points this paper is trying to address.

Examining how AI reasoning disclosure affects human trust calibration
Investigating whether transparency crowds out unique human knowledge utilization
Assessing persuasive effects of AI explanations on human decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-registered, incentive-compatible experiment (N = 752) manipulating AI reasoning disclosure (none, brief, detailed)
First empirical evidence that revealed AI reasoning acts as a persuasive heuristic that induces over-trust rather than calibrated trust
Demonstration that transparency crowds out unique human knowledge (UHK) in human-AI decision-making
Zenan Chen
Naveen Jindal School of Management, University of Texas at Dallas, Richardson, TX, USA
Ruijiang Gao
University of Texas at Dallas
Machine Learning · Human-AI Systems · Causal Inference · Generative Models
Yingzhi Liang
Naveen Jindal School of Management, University of Texas at Dallas, Richardson, TX, USA