Pedagogy-driven Evaluation of Generative AI-powered Intelligent Tutoring Systems

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Current generative-AI-driven Intelligent Tutoring Systems (ITSs) lack an educationally grounded, reliable, and generalizable evaluation framework, which impedes progress tracking, result comparability, and cross-context generalization. Method: We propose the first learning-science-informed evaluation framework for generative-AI-enhanced ITSs, structured around three core dimensions (fairness, uniformity, and scalability) to overcome the limitations of subjective scoring and non-standardized benchmarks. The interdisciplinary approach integrates large language models, educational dialogue analytics, and foundational theories from cognitive psychology and pedagogy; we systematically survey state-of-the-art practices and conduct in-depth analyses of real-world implementations to identify critical bottlenecks. Contribution: The framework establishes a theoretically rigorous foundation and an actionable methodology for developing trustworthy, reproducible, and comparable evaluation benchmarks for generative-AI-powered ITSs, enabling principled advancement in AI-enhanced education research and practice.

📝 Abstract
The interdisciplinary research domain of Artificial Intelligence in Education (AIED) has a long history of developing Intelligent Tutoring Systems (ITSs) by integrating insights from technological advancements, educational theories, and cognitive psychology. The remarkable success of generative AI (GenAI) models has accelerated the development of large language model (LLM)-powered ITSs, which have the potential to imitate human-like, pedagogically rich, and cognitively demanding tutoring. However, the progress and impact of these systems remain largely untraceable due to the absence of reliable, universally accepted, and pedagogy-driven evaluation frameworks and benchmarks. Most existing educational dialogue-based ITS evaluations rely on subjective protocols and non-standardized benchmarks, leading to inconsistencies and limited generalizability. In this work, we take a step back from mainstream ITS development and provide a comprehensive review of state-of-the-art evaluation practices, highlighting associated challenges through real-world case studies from careful and caring AIED research. Finally, building on insights from previous interdisciplinary AIED research, we propose three practical, feasible, and theoretically grounded research directions, rooted in learning science principles and aimed at establishing fair, unified, and scalable evaluation methodologies for ITSs.
Problem

Research questions and friction points this paper is trying to address.

Lack of pedagogy-driven evaluation frameworks for AI tutors
Inconsistent benchmarks limit assessment of educational AI systems
Need standardized methodologies to evaluate intelligent tutoring systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposing pedagogy-driven evaluation frameworks for ITS
Establishing standardized benchmarks for AI tutoring systems
Integrating learning science principles into assessment methodologies
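To make the idea of a standardized, pedagogy-driven benchmark concrete, the sketch below shows what a fixed, weighted scoring rubric for a single tutor turn could look like, replacing free-form subjective ratings with reproducible per-dimension scores. The dimension names and weights here are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions and weights -- the paper does not
# prescribe these; they only illustrate a standardized scoring scheme.
RUBRIC = {
    "mistake_identification": 0.3,  # did the tutor spot the learner's error?
    "scaffolding": 0.4,             # hints and guiding questions, not direct answers
    "actionability": 0.3,           # concrete next step offered to the learner
}

@dataclass
class TurnScores:
    """Per-dimension scores in [0.0, 1.0] for one tutor dialogue turn."""
    scores: dict

    def weighted_total(self) -> float:
        # Aggregate into a single comparable benchmark score.
        return sum(RUBRIC[dim] * s for dim, s in self.scores.items())

turn = TurnScores({"mistake_identification": 1.0,
                   "scaffolding": 0.5,
                   "actionability": 1.0})
print(round(turn.weighted_total(), 2))  # 0.8
```

Because every system is scored against the same rubric and weights, results become directly comparable across studies, which is precisely the uniformity property the framework argues for.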