Beyond AI advice -- independent aggregation boosts human-AI accuracy

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge that human users often struggle to accurately assess the reliability of AI-generated advice in the traditional AI-as-advisor paradigm, leading to overreliance on inaccurate advice or unwarranted dismissal of correct recommendations, and potentially eroding their own decision-making skills over time. To mitigate this, the authors propose the Hybrid Confirmation Tree (HCT) framework, which first elicits independent initial judgments from a human and an AI agent; when these judgments align, the decision is accepted outright, and when they diverge, a second human arbitrator resolves the discrepancy. By preserving judgment independence and incorporating an arbitration mechanism, the HCT enhances transparency and robustness while curbing both overtrust in and misjudgment of AI advice. Experiments across ten diverse datasets show that the HCT outperforms the conventional AI-as-advisor approach in nearly all settings, even when the AI provides explanatory justifications, substantially improving the accuracy of human-AI collaborative decision-making.
📝 Abstract
Artificial intelligence (AI) is broadly deployed as an advisor to human decision-makers: AI recommends a decision and a human accepts or rejects the advice. This approach, however, has several limitations: People frequently ignore accurate advice and rely too much on inaccurate advice, and their decision-making skills may deteriorate over time. Here, we compare the AI-as-advisor approach to the hybrid confirmation tree (HCT), an alternative strategy that preserves the independence of human and AI judgments. The HCT elicits a human judgment and an AI judgment independently of each other. If they agree, that decision is accepted. If not, a second human breaks the tie. For the comparison, we used 10 datasets from various domains, including medical diagnostics and misinformation discernment, and a subset of four datasets in which AI also explained its decision. The HCT outperformed the AI-as-advisor approach in all datasets. The HCT also performed better in almost all cases in which AI offered an explanation of its judgment. Using signal detection theory to interpret these results, we find that the HCT outperforms the AI-as-advisor approach because people cannot discriminate well enough between correct and incorrect AI advice. Overall, the HCT is a robust, accurate, and transparent alternative to the AI-as-advisor approach, offering a simple mechanism to tap into the wisdom of hybrid crowds.
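The tie-breaking rule described in the abstract (accept when independent human and AI judgments agree; otherwise a second human arbitrates) can be sketched as follows. This is a minimal illustration of the stated decision rule, not the authors' implementation; the function name and labels are hypothetical.

```python
def hct_decision(human_judgment, ai_judgment, arbitrator_judgment):
    """Hybrid Confirmation Tree (HCT) decision rule.

    A human and an AI first judge independently. If they agree, that
    shared judgment is accepted; if they disagree, a second, independent
    human arbitrator breaks the tie.
    """
    if human_judgment == ai_judgment:
        return human_judgment
    return arbitrator_judgment


# Example with binary judgments (e.g., a diagnosis or a veracity rating):
hct_decision("positive", "positive", "negative")  # agreement -> "positive"
hct_decision("positive", "negative", "negative")  # tie broken -> "negative"
```

Note that the arbitrator's judgment only matters in disagreement cases, which is what keeps the initial human and AI judgments independent of each other, in contrast to the AI-as-advisor setup where the human sees the AI's recommendation before deciding.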
Problem

Research questions and friction points this paper is trying to address.

AI advice
human-AI collaboration
decision-making accuracy
judgment independence
hybrid decision systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Confirmation Tree
human-AI collaboration
independent aggregation
decision accuracy
signal detection theory
Julian Berger
Max Planck Institute for Human Development
Pantelis P. Analytis
Associate Professor, Department of Business and Management, University of Southern Denmark
Cognitive Science · Behavioral Economics · Computational Social Science · Machine Learning
Ville Satopää
INSEAD; Science of Intelligence, Technical University Berlin
Ralf H. J. M. Kurvers
Max Planck Institute for Human Development