🤖 AI Summary
This study addresses two key challenges in deploying generative AI for education: the difficulty of translating pedagogical intuition into effective prompts, and the lack of consensus on what constitutes evidence-based teaching practice. Methodologically, we propose an assessment-driven, responsible AI development paradigm: (1) we construct seven educationally grounded benchmarks rooted in learning science; (2) we design a fine-grained instruction-tuning dataset and train LearnLM-Tutor, a domain-specialized large language model; and (3) we introduce a novel multi-dimensional evaluation framework integrating quantitative and qualitative, automated and human assessments, coupled with a closed-loop "evaluation, feedback, fine-tuning" mechanism. Our contribution is the first systematic integration of educational benchmarks with foundation models, specifically enabling pedagogically informed adaptation of Gemini. Experiments demonstrate that LearnLM-Tutor significantly outperforms prompt-engineered baselines in instructional supportiveness, explanatory clarity, and cognitive alignment, and is consistently preferred by both teachers and students, establishing a scalable methodological foundation for educational AI evaluation.
📝 Abstract
A major challenge facing the world is the provision of equitable and universal access to quality education. Recent advances in generative AI (gen AI) have created excitement about the potential of new technologies to offer a personal tutor for every learner and a teaching assistant for every teacher. The full extent of this dream, however, has not yet materialised. We argue that this is primarily due to the difficulties of verbalising pedagogical intuitions into gen AI prompts and the lack of good evaluation practices, reinforced by the challenges of defining excellent pedagogy. Here we present our work collaborating with learners and educators to translate high-level principles from learning science into a pragmatic set of seven diverse educational benchmarks, spanning quantitative, qualitative, automatic and human evaluations, and to develop a new set of fine-tuning datasets to improve the pedagogical capabilities of Gemini, introducing LearnLM-Tutor. Our evaluations show that LearnLM-Tutor is consistently preferred over a prompt-tuned Gemini by educators and learners on a number of pedagogical dimensions. We hope that this work can serve as a first step towards developing a comprehensive educational evaluation framework, and that it can enable rapid progress within the AI and EdTech communities towards maximising the positive impact of gen AI in education.