🤖 AI Summary
To address inefficient knowledge transfer in human-robot collaborative teaching caused by misalignment between the mental models of human teachers and robot learners, this paper proposes the MMM Score, a quantitative evaluation metric, and introduces a teaching-intention-driven dynamic feedback paradigm that explicitly models mental-model discrepancies and optimizes them in a closed loop. Methodologically, it combines large language models for parsing natural-language teaching intentions, cognitive alignment modeling, and adaptive feedback generation. In a virtual robot puzzle-teaching experiment with 150 participants, the approach improved teaching success rates by 37%, significantly reduced teachers' misinterpretation rates, and deepened teachers' understanding of the robot's learning mechanisms. The core contributions are: (1) establishing a quantifiable framework for assessing mental model alignment; and (2) moving beyond conventional task-performance-only feedback toward intention-driven, bidirectional cognitive co-adaptation.
📝 Abstract
The rapid development of artificial intelligence and robotics has had a significant impact on our lives, with intelligent systems increasingly taking over tasks traditionally performed by humans. Efficient knowledge transfer requires aligning the mental model of the human teacher with the capabilities of the robot learner. This paper introduces the Mental Model Mismatch (MMM) Score, a feedback mechanism designed to quantify and reduce such mismatches by aligning human teaching behavior with robot learning behavior. Using Large Language Models (LLMs), we analyze teachers' natural-language intentions to generate adaptive feedback. A study with 150 participants teaching a virtual robot to solve a puzzle game shows that intention-based feedback significantly outperforms traditional performance-based feedback and no feedback. The results suggest that intention-based feedback improves instructional outcomes, deepens understanding of the robot's learning process, and reduces misconceptions. This research addresses a critical gap in human-robot interaction (HRI) by providing a method to quantify and mitigate discrepancies between human mental models and robot capabilities, with the goal of improving both robot learning and human teaching effectiveness.
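To make the idea concrete, here is a minimal sketch of how a mismatch score and adaptive feedback of this kind could be computed. This is an illustrative assumption, not the paper's actual definition: it represents the teacher's parsed intentions and the robot's learned behaviors as labeled sets and uses Jaccard distance as a simple mismatch proxy; the function names `mmm_score` and `adaptive_feedback` are hypothetical.

```python
def mmm_score(intended: set[str], learned: set[str]) -> float:
    """Hypothetical mismatch score in [0, 1]:
    0 = the robot learned exactly what the teacher intended,
    1 = no overlap between intended and learned behaviors.
    Uses Jaccard distance as a simple stand-in for the paper's metric."""
    if not intended and not learned:
        return 0.0
    overlap = len(intended & learned)
    union = len(intended | learned)
    return 1.0 - overlap / union

def adaptive_feedback(intended: set[str], learned: set[str]) -> str:
    """Turn the mismatch into feedback for the human teacher, pointing
    at what was taught but not learned and what was learned by accident."""
    missing = intended - learned    # intended but not yet learned
    spurious = learned - intended   # learned but never intended
    parts = []
    if missing:
        parts.append(f"not yet learned: {sorted(missing)}")
    if spurious:
        parts.append(f"unintended behavior: {sorted(spurious)}")
    return "; ".join(parts) if parts else "mental models aligned"
```

In the full system, the `intended` set would come from an LLM parsing the teacher's natural-language instructions, and `learned` from inspecting the robot's policy; the score then drives the closed-loop feedback rather than task performance alone.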