🤖 AI Summary
Heterogeneity in students’ mathematical proficiency and motivation complicates differentiated instruction, while existing AI tools predominantly focus on student performance metrics and overlook teachers’ pedagogical needs. To address this gap, we propose a teacher-centered multi-agent system that jointly models learners’ cognitive levels and intrinsic motivation—marking the first such integration in AI-assisted education. Leveraging large language models (LLMs), we design a tripartite agent architecture: a Learner Agent simulating individual student profiles; a Teacher Agent implementing evidence-based instructional principles; and an Evaluator Agent performing automated content quality assessment. The system generates personalized instructional materials aligned with Grade 8 mathematics curricula and provides real-time, actionable feedback on material quality. Empirical evaluation demonstrates stable output generation, strong curriculum alignment, and high adaptability. Teacher evaluations confirm the system’s pedagogical validity and the coherence of its task structure, indicating strong support for scalable, individualized mathematics instruction.
📝 Abstract
The increasing heterogeneity of student populations poses significant challenges for teachers, particularly in mathematics education, where cognitive, motivational, and emotional differences strongly influence learning outcomes. While AI-driven personalization tools have emerged, most remain performance-focused, offering limited support for teachers and neglecting broader pedagogical needs. This paper presents the FACET framework, a teacher-facing, large language model (LLM)-based multi-agent system designed to generate individualized classroom materials that integrate both cognitive and motivational dimensions of learner profiles. The framework comprises three specialized agents: (1) learner agents that simulate diverse profiles incorporating topic proficiency and intrinsic motivation, (2) a teacher agent that adapts instructional content according to didactical principles, and (3) an evaluator agent that provides automated quality assurance. We tested the system using authentic Grade 8 mathematics curriculum content and evaluated its feasibility through (a) automated agent-based assessment of output quality and (b) exploratory feedback from K-12 in-service teachers. Results from ten internal evaluation runs indicated high stability and strong alignment between generated materials and learner profiles, and teacher feedback particularly praised the structure and suitability of the tasks. The findings demonstrate the potential of multi-agent LLM architectures to provide scalable, context-aware personalization in heterogeneous classroom settings, and outline directions for extending the framework to richer learner profiles and real-world classroom trials.
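The tripartite architecture described above can be sketched as a simple pipeline: learner profiles flow into a teacher agent that adapts a task, and an evaluator agent checks the result against the profile. This is a minimal rule-based sketch, not the FACET implementation; in the actual system each agent would be an LLM call, and all class, function, and field names here (`LearnerProfile`, `teacher_agent`, `evaluator_agent`, the proficiency/motivation levels) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    """Simulated student profile (the learner agent's output)."""
    name: str
    proficiency: str  # topic proficiency: "low", "medium", or "high"
    motivation: str   # intrinsic motivation: "low" or "high"

# Hypothetical mapping from proficiency level to task difficulty.
DIFFICULTY = {"low": "scaffolded", "medium": "standard", "high": "challenge"}

def teacher_agent(profile: LearnerProfile, topic: str) -> dict:
    """Adapt one task to one learner profile.

    In the real system this step would be an LLM prompt guided by
    didactical principles; here a rule-based stand-in illustrates how
    both profile dimensions shape the material.
    """
    difficulty = DIFFICULTY[profile.proficiency]
    # Low intrinsic motivation -> embed the task in a real-world context.
    framing = "a real-world context" if profile.motivation == "low" else "abstract practice"
    return {
        "learner": profile.name,
        "topic": topic,
        "task": f"{difficulty} {topic} task framed as {framing}",
    }

def evaluator_agent(material: dict, profile: LearnerProfile) -> dict:
    """Automated quality assurance: check material/profile alignment."""
    expected = DIFFICULTY[profile.proficiency]
    aligned = material["task"].startswith(expected)
    feedback = "aligned with profile" if aligned else f"expected a {expected} task"
    return {"aligned": aligned, "feedback": feedback}

if __name__ == "__main__":
    # Two contrasting simulated learners for the same Grade 8 topic.
    profiles = [
        LearnerProfile("student_a", "low", "low"),
        LearnerProfile("student_b", "high", "high"),
    ]
    for p in profiles:
        material = teacher_agent(p, "linear equations")
        report = evaluator_agent(material, p)
        print(material["task"], "->", report["feedback"])
```

In the described evaluation loop, the evaluator's feedback would be fed back to the teacher agent for revision rather than only printed, which is the part this sketch leaves out.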