AI Knows Best? The Paradox of Expertise, AI-Reliance, and Performance in Educational Tutoring Decision-Making Tasks

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates human judgment accuracy and reliance mechanisms in AI-augmented educational tutoring decisions. Through a controlled experiment, it compares experienced tutors and non-tutors in evaluating the correctness of teaching feedback under two explainable AI interfaces: textual explanations versus inline highlighting. Results show that: (1) Non-tutors exhibit higher compliance with AI recommendations and achieve significantly improved accuracy, whereas experts frequently override correct AI judgments, forfeiting performance gains; (2) Textual explanations induce unwarranted overreliance, while inline highlighting does not—but neither explanation modality improves final accuracy, and both increase decision latency; (3) Human-AI collaboration outperforms unassisted human performance but remains inferior to AI-only performance. The study identifies expertise level as a critical moderator of AI adoption behavior and provides the first empirical evidence of an interaction effect between explanation modality and user expertise on collaborative efficacy—yielding theoretical insights and practical implications for trustworthy AI design in education.

📝 Abstract
We present an empirical study of how experienced tutors and non-tutors judge the correctness of tutor praise responses under Artificial Intelligence (AI)-assisted interfaces offering different explanation types (textual explanations vs. inline highlighting). We first fine-tuned several Large Language Models (LLMs) to produce binary correctness labels and explanations, achieving up to 88% accuracy and a 0.92 F1 score with GPT-4. We then used the fine-tuned GPT-4 model to assist 95 participants in tutoring decision-making tasks, offering different types of explanations. Our findings show that although human-AI collaboration outperforms humans alone in evaluating tutor responses, it remains less accurate than AI alone. Moreover, we find that non-tutors tend to follow the AI's advice more consistently, which boosts their overall accuracy on the task, especially when the AI is correct. In contrast, experienced tutors often override the AI's correct suggestions and thus miss out on gains from the AI's generally high baseline accuracy. Further analysis reveals that textual reasoning explanations increase overreliance and reduce underreliance, while inline highlighting does neither. Moreover, neither explanation style has a significant effect on performance, and both cost participants more time to complete the task rather than saving it. Our findings reveal a tension between expertise, explanation design, and efficiency in AI-assisted decision-making, highlighting the need for balanced approaches that foster more effective human-AI collaboration.
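To make the reported evaluation metrics concrete: the fine-tuned models emit binary correctness labels, which are scored against gold annotations with accuracy and F1. The sketch below is a minimal, hypothetical illustration of how those two metrics are computed on toy labels; the function names and the data are assumptions for illustration, not the paper's code or results.

```python
def accuracy(gold, pred):
    """Fraction of predictions that match the gold labels."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1(gold, pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 1 = "praise response judged correct", 0 = "incorrect"
gold = [1, 1, 0, 1, 0, 1, 0, 1]
pred = [1, 1, 0, 0, 0, 1, 1, 1]
print(f"accuracy={accuracy(gold, pred):.2f}  F1={f1(gold, pred):.2f}")
# → accuracy=0.75  F1=0.80
```

Note that accuracy and F1 can diverge when the label distribution is skewed, which is why the paper reports both (88% accuracy alongside 0.92 F1).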
Problem

Research questions and friction points this paper is trying to address.

Examining how AI assistance affects tutor decision-making accuracy
Investigating how expertise influences reliance on AI recommendations
Evaluating explanation styles' impact on human-AI collaboration efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLMs for binary correctness labeling
Compared textual explanations versus inline highlighting
Analyzed expert versus non-expert AI reliance patterns