Performance and Metacognition Disconnect when Reasoning in Human-AI Interaction

📅 2024-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how accurately people assess their own performance in AI-assisted logical reasoning and how that accuracy relates to AI literacy. Method: a two-stage, large-scale behavioral experiment (N = 246 and N = 452) combining metacognitive calibration analysis with confidence–accuracy computational modeling. Results: AI assistance significantly improved task performance (+3 points), yet participants consistently overestimated their scores (mean overestimation: +4 points). Crucially, higher AI literacy was associated with greater confidence, poorer calibration, and lower metacognitive accuracy, contrary to the conventional assumption that technical proficiency improves self-assessment. The study provides the first empirical evidence that AI assistance attenuates the Dunning–Kruger effect. These findings establish a robust negative correlation between AI literacy and metacognitive accuracy, offering foundational theoretical insights for cognitive assessment and system design in trustworthy human–AI collaboration.

📝 Abstract
Optimizing human-AI interaction requires users to critically reflect on their own performance. Our paper examines whether people using AI to complete tasks can accurately monitor how well they perform. In Study 1, participants (N = 246) used AI to solve 20 logical problems from the Law School Admission Test. While their task performance improved by three points compared to a norm population, participants overestimated their performance by four points. Interestingly, higher AI literacy was linked to less accurate self-assessment: participants with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning-Kruger effect, usually observed in this task, ceased to exist with AI. Study 2 (N = 452) replicates these findings. We discuss how AI levels metacognitive performance and consider the consequences of performance overestimation for interactive AI systems that enhance cognition.
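The calibration analysis described above centers on the gap between estimated and actual scores. A minimal sketch of that computation (not the authors' code; all participant data below are made-up placeholders):

```python
# Illustrative sketch of metacognitive calibration analysis.
# Calibration bias = estimated score minus actual score; a positive
# mean means participants overestimate their performance.

def calibration_bias(estimated, actual):
    """Mean overestimation across participants, in points."""
    return sum(e - a for e, a in zip(estimated, actual)) / len(actual)

def pearson_r(xs, ys):
    """Plain Pearson correlation, e.g. between AI-literacy scores and
    per-participant calibration error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: estimated vs. actual scores on 20 items.
estimated = [16, 14, 18, 15, 17]
actual = [12, 11, 13, 12, 14]
print(calibration_bias(estimated, actual))  # prints 3.6
```

A positive correlation between AI-literacy scores and this per-participant bias would correspond to the paper's finding that more AI-literate participants were more overconfident.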
Problem

Research questions and friction points this paper is trying to address.

Human-AI Collaboration
Self-Assessment Accuracy
AI Literacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-Assisted Problem Solving
Self-Assessment Accuracy
Human-AI Collaboration Optimization