Fostering human learning is crucial for boosting human-AI synergy

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current human-AI collaboration often underperforms because experimental designs neglect human learning, preventing humans and AI from dynamically adapting their joint strategies. Method: A Bayesian meta-regression across the 74 empirical studies of a prior meta-analysis, together with a systematic framework for evaluating human-AI interaction designs. Contribution/Results: The analysis uncovers a critical coupling effect between outcome feedback and AI explainability: explanations provided without feedback are linked to degraded collaboration performance, whereas combining the two yields significant positive synergy. Human learning is thus empirically supported as a central moderating variable in human-AI collaboration. Accordingly, the authors propose a "learning-centered" interaction paradigm, shifting the field from static performance assessment toward dynamic, adaptive human-AI co-learning research. This work offers both theoretical foundations and methodological guidance for designing AI systems that actively support sustained human learning and collaborative adaptation.

📝 Abstract
The collaboration between humans and artificial intelligence (AI) holds the promise of achieving superior outcomes compared to either acting alone. Nevertheless, our understanding of the conditions that facilitate such human-AI synergy remains limited. A recent meta-analysis showed that, on average, human-AI combinations do not outperform the better individual agent, indicating overall negative human-AI synergy. We argue that this pessimistic conclusion arises from insufficient attention to human learning in the experimental designs used. To substantiate this claim, we re-analyzed all 74 studies included in the original meta-analysis, which yielded two new findings. First, most previous research overlooked design features that foster human learning, such as providing trial-by-trial outcome feedback to participants. Second, our re-analysis, using robust Bayesian meta-regressions, demonstrated that studies providing outcome feedback show relatively higher synergy than those without outcome feedback. Crucially, when feedback is paired with AI explanations we tend to find positive human-AI synergy, while AI explanations provided without feedback were strongly linked to negative synergy, indicating that explanations are useful for synergy only when humans can learn to verify the AI's reliability through feedback. We conclude that the current literature underestimates the potential for human-AI collaboration because it predominantly relies on experimental designs that do not facilitate human learning, thus hindering humans from effectively adapting their collaboration strategies. We therefore advocate for a paradigm shift in human-AI interaction research that explicitly incorporates and tests human learning mechanisms to enhance our understanding of and support for successful human-AI collaboration.
Problem

Research questions and friction points this paper is trying to address.

Investigates why human-AI synergy is often negative in current studies.
Identifies lack of human learning mechanisms like feedback in designs.
Proposes incorporating learning features to unlock positive collaboration potential.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using trial-by-trial outcome feedback to foster human learning
Combining AI explanations with feedback to verify AI reliability
Applying robust Bayesian meta-regressions to re-analyze synergy studies
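As a rough illustration of the moderator logic behind the re-analysis (not the paper's actual model, data, or estimates), the sketch below fits a meta-regression with a feedback × explanation interaction to synthetic per-study effect sizes using inverse-variance weighted least squares. The paper's robust Bayesian version additionally models between-study heterogeneity and uses outlier-robust likelihoods; all variable names and numbers here are invented.

```python
import numpy as np

# Toy synthetic data: per-study synergy effect sizes, their sampling
# variances, and two binary study-design moderators. All numbers are
# illustrative and NOT taken from the paper.
rng = np.random.default_rng(0)
k = 74                                 # number of studies, as in the meta-analysis
feedback = rng.integers(0, 2, k)       # 1 = trial-by-trial outcome feedback given
explain = rng.integers(0, 2, k)        # 1 = AI explanations shown
true_effect = -0.2 + 0.3 * feedback - 0.15 * explain + 0.25 * feedback * explain
v = rng.uniform(0.01, 0.05, k)         # within-study sampling variances
g = rng.normal(true_effect, np.sqrt(v))  # observed effect sizes

# Design matrix: intercept, feedback, explanation, and their interaction.
X = np.column_stack([np.ones(k), feedback, explain, feedback * explain])

# Inverse-variance weighted least squares (fixed-effect meta-regression).
W = np.diag(1.0 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))

for name, b, s in zip(["intercept", "feedback", "explanation", "fb x expl"],
                      beta, se):
    print(f"{name:12s} beta = {b:+.3f} (SE {s:.3f})")
```

In this setup, a positive interaction coefficient corresponds to the paper's key finding: explanations help only when paired with feedback that lets humans learn to calibrate their reliance on the AI.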
Julian Berger
Center for Adaptive Rationality, Max Planck Institute for Human Development
Jason W. Burton
Department of Psychology, University of Copenhagen
Ralph Hertwig
Center for Adaptive Rationality, Max Planck Institute for Human Development
Thomas Kosch
Junior Professor of Computer Science, Humboldt University of Berlin
Human-AI Interaction, Human Augmentation, User Sensing and Inference, Meta HCI Research
Ralf H. J. M. Kurvers
Center for Adaptive Rationality, Max Planck Institute for Human Development
Benito Kurzenberger
Department of Psychology & Ergonomics, Technische Universität Berlin
Christopher Lazik
Software Engineering / Human-Computer Interaction, Humboldt-Universität zu Berlin
Software Engineering, Human-Computer Interaction
Linda Onnasch
Department of Psychology & Ergonomics, Technische Universität Berlin
Tobias Rieger
Department of Psychology & Ergonomics, Technische Universität Berlin
Anna I. Thoma
Center for Adaptive Rationality, Max Planck Institute for Human Development
Dirk U. Wulff
Max Planck Institute for Human Development & University of Basel
Stefan M. Herzog
Senior Researcher, Center for Adaptive Rationality, Max Planck Institute for Human Development
boosting, JDM, hybrid collective intelligence, AI, cognition online