When AI Meets Early Childhood Education: Large Language Models as Assessment Teammates in Chinese Preschools

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of scaling and sustaining high-quality monitoring of teacher–child interactions within China’s vast preschool system, which serves 36 million children and where traditional expert evaluations are impractical for routine use. To this end, the authors construct TEPE-TCI-370h, the first large-scale Chinese-language dataset of kindergarten teacher–child interactions, and introduce Interaction2Eval—a novel framework tailored for early childhood education that integrates large language models, child speech recognition, homophone disambiguation for Mandarin, and rubric-based reasoning to automatically extract structured quality indicators from naturalistic classroom interactions. Empirical validation across 43 classrooms demonstrates an 18-fold increase in assessment efficiency and 88% agreement with expert judgments, enabling a shift from annual audits to frequent, low-cost, and precise monthly AI-assisted quality monitoring.

📝 Abstract
High-quality teacher-child interaction (TCI) is fundamental to early childhood development, yet traditional expert-based assessment faces a critical scalability challenge. In large systems like China's, which serves 36 million children across 250,000+ kindergartens, the cost and time requirements of manual observation make continuous quality monitoring infeasible, relegating assessment to infrequent episodic audits that limit timely intervention and improvement tracking. In this paper, we investigate whether AI can serve as a scalable assessment teammate by extracting structured quality indicators and validating their alignment with human expert judgments. Our contributions include: (1) TEPE-TCI-370h (Tracing Effective Preschool Education), the first large-scale dataset of naturalistic teacher-child interactions in Chinese preschools (370 hours, 105 classrooms) with standardized ECQRS-EC and SSTEW annotations; (2) Interaction2Eval, a specialized LLM-based framework addressing domain-specific challenges (child speech recognition, Mandarin homophone disambiguation, and rubric-based reasoning) and achieving up to 88% agreement with expert judgments; (3) deployment validation across 43 classrooms demonstrating an 18x efficiency gain in the assessment workflow, highlighting its potential for shifting from annual expert audits to monthly AI-assisted monitoring with targeted human oversight. This work not only demonstrates the technical feasibility of scalable, AI-augmented quality assessment but also lays the foundation for a new paradigm in early childhood education, one where continuous, inclusive, AI-assisted evaluation becomes the engine of systemic improvement and equitable growth.
Problem

Research questions and friction points this paper is trying to address.

teacher-child interaction
early childhood education
scalable assessment
quality monitoring
expert-based evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Teacher-Child Interaction
Early Childhood Education
AI-assisted Assessment
Scalable Evaluation
Xingming Li
National University of Defense Technology, Changsha, China
Runke Huang
The Chinese University of Hong Kong, Shenzhen, China
Yanan Bao
Google DeepMind
Machine Learning, Data Mining, Green Communications
Yuye Jin
The Chinese University of Hong Kong, Shenzhen, China
Yuru Jiao
The Chinese University of Hong Kong, Shenzhen, China
Qingyong Hu
Ph.D. in Computer Science, University of Oxford
3D Vision, Photogrammetry, Point Cloud Processing, Autonomous Driving