🤖 AI Summary
Human-AI collaboration often falls short of its potential because experimental designs neglect human learning, preventing humans from adapting their collaboration strategies over time. Method: We re-analyzed the 74 empirical studies from a recent meta-analysis using robust Bayesian meta-regressions and developed a systematic framework for evaluating human-AI interaction designs. Contribution/Results: The analysis uncovers a critical coupling between outcome feedback and AI explainability: explanations provided on their own degrade collaboration performance, whereas explanations combined with feedback yield significant positive synergy. Human learning is thus empirically supported as a central moderating variable in human-AI collaboration. Accordingly, we propose a "learning-centered" interaction paradigm, shifting the field from static performance assessment toward dynamic, adaptive human-AI co-learning research. This work provides theoretical foundations and methodological guidance for designing AI systems that actively support sustained human learning and collaborative adaptation.
📝 Abstract
The collaboration between humans and artificial intelligence (AI) holds the promise of achieving superior outcomes compared to either acting alone. Nevertheless, our understanding of the conditions that facilitate such human-AI synergy remains limited. A recent meta-analysis showed that, on average, human-AI combinations do not outperform the better individual agent, indicating overall negative human-AI synergy. We argue that this pessimistic conclusion arises from insufficient attention to human learning in the experimental designs used. To substantiate this claim, we re-analyzed all 74 studies included in the original meta-analysis, which yielded two new findings. First, most previous research overlooked design features that foster human learning, such as providing trial-by-trial outcome feedback to participants. Second, our re-analysis, using robust Bayesian meta-regressions, demonstrated that studies providing outcome feedback show higher synergy than those without it. Crucially, when feedback is paired with AI explanations, we tend to find positive human-AI synergy, whereas AI explanations provided without feedback are strongly linked to negative synergy, indicating that explanations support synergy only when humans can learn to verify the AI's reliability through feedback. We conclude that the current literature underestimates the potential for human-AI collaboration because it predominantly relies on experimental designs that do not facilitate human learning, thus hindering humans from effectively adapting their collaboration strategies. We therefore advocate for a paradigm shift in human-AI interaction research that explicitly incorporates and tests human learning mechanisms to enhance our understanding of, and support for, successful human-AI collaboration.
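The coupled feedback × explanation effect described above can be illustrated with a toy meta-regression. The sketch below is a simplified, fixed-effect frequentist analogue (inverse-variance weighted least squares) of the paper's robust Bayesian meta-regressions, run on synthetic effect sizes invented purely for illustration; it does not reproduce the actual 74-study data, priors, or robust model.

```python
import numpy as np

# Synthetic, hypothetical effect sizes (e.g., Hedges' g) for illustration --
# NOT the 74 studies from the meta-analysis. Each value is one "study",
# grouped by whether its design included outcome feedback and AI explanations.
g = np.array([-0.12, -0.08,   # neither feedback nor explanations
               0.04,  0.06,   # feedback only
              -0.28, -0.32,   # explanations only
               0.14,  0.16])  # feedback + explanations
se = np.full(8, 0.10)         # assumed equal standard errors

feedback    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
explanation = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Moderator model with an interaction term:
#   g_i = b0 + b1*feedback_i + b2*explanation_i
#            + b3*(feedback_i * explanation_i) + e_i
X = np.column_stack([np.ones(8), feedback, explanation,
                     feedback * explanation])

# Inverse-variance weighted least squares (fixed-effect meta-regression):
# scale each row by sqrt(1/se^2) and solve the ordinary LS problem.
w_sqrt = np.sqrt(1.0 / se**2)
beta, *_ = np.linalg.lstsq(X * w_sqrt[:, None], g * w_sqrt, rcond=None)

print(dict(zip(["intercept", "feedback", "explanation", "interaction"],
               np.round(beta, 3))))
```

With these made-up numbers, the explanation coefficient is negative while the interaction coefficient is positive, mirroring the qualitative pattern reported in the abstract: explanations alone are linked to negative synergy, but paired with feedback they are linked to positive synergy.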