How well do Large Language Models Recognize Instructional Moves? Establishing Baselines for Foundation Models in Educational Discourse

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models' (LLMs) generalizability in educational technology by assessing their out-of-the-box capability to identify instructional moves in authentic classroom discourse. Method: Six foundation LLMs are benchmarked on an expert-annotated corpus of real classroom transcripts, using zero-shot, one-shot, and few-shot prompting strategies for fine-grained classification; agreement with expert annotations is quantified via Cohen's Kappa. Contribution/Results: The study establishes a baseline performance map for LLMs in educational discourse understanding. Zero-shot performance is moderate, and prompt engineering improves results, with the best few-shot configuration reaching κ = 0.58, but it does not overcome inherent reliability limits: substantial per-move heterogeneity and precision–recall trade-offs persist. These findings confirm the task's difficulty and expose fundamental model limitations in pedagogically grounded behavioral classification.
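The agreement statistic used throughout is Cohen's Kappa between model output and expert coding. Below is a minimal sketch of that scoring step, assuming an illustrative move label set and toy data; it is not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): scoring model-assigned
# instructional-move labels against expert coding with Cohen's Kappa.
# The label names and toy data below are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

expert = ["open_question", "evaluation", "uptake", "open_question", "evaluation"]
model  = ["open_question", "uptake",     "uptake", "open_question", "open_question"]

kappa = cohen_kappa_score(expert, model)
print(f"Cohen's kappa vs. expert annotations: {kappa:.2f}")
```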

📝 Abstract
Large language models (LLMs) are increasingly adopted in educational technologies for a variety of tasks, from generating instructional materials and assisting with assessment design to tutoring. While prior work has investigated how models can be adapted or optimized for specific tasks, far less is known about how well LLMs perform at interpreting authentic educational scenarios without significant customization. As LLM-based systems become widely adopted by learners and educators in everyday academic contexts, understanding their out-of-the-box capabilities is increasingly important for setting expectations and benchmarking. We compared six LLMs to estimate their baseline performance on a simple but important task: classifying instructional moves in authentic classroom transcripts. We evaluated typical prompting methods: zero-shot, one-shot, and few-shot prompting. We found that while zero-shot performance was moderate, providing comprehensive examples (few-shot prompting) significantly improved performance for state-of-the-art models, with the strongest configuration reaching Cohen's Kappa = 0.58 against expert-coded annotations. At the same time, improvements were neither uniform nor complete: performance varied considerably by instructional move, and higher recall frequently came at the cost of increased false positives. Overall, these findings indicate that foundation models demonstrate meaningful yet limited capacity to interpret instructional discourse, with prompt design helping to surface capability but not eliminating fundamental reliability constraints.
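As a rough illustration of the three prompting regimes compared in the abstract, the sketch below assembles zero-, one-, and few-shot classification prompts for a single teacher utterance. The move labels and demonstration examples are hypothetical placeholders, not the paper's coding scheme.

```python
# Hypothetical sketch of the zero-/one-/few-shot prompting regimes.
# Move labels and example utterances are assumptions, not the paper's scheme.
MOVES = ["open_question", "evaluation", "uptake", "direct_instruction"]

EXAMPLES = [  # expert-coded demonstrations used in the one-/few-shot conditions
    ("Why do you think the character made that choice?", "open_question"),
    ("Good, that's exactly right.", "evaluation"),
    ("So you're saying the pattern repeats every four terms?", "uptake"),
]

def build_prompt(utterance: str, n_shots: int) -> str:
    """Return a classification prompt containing 0, 1, or more worked examples."""
    lines = [
        "Classify the teacher utterance into one instructional move: "
        + ", ".join(MOVES) + ".",
    ]
    for text, label in EXAMPLES[:n_shots]:
        lines.append(f'Utterance: "{text}"\nMove: {label}')
    lines.append(f'Utterance: "{utterance}"\nMove:')
    return "\n\n".join(lines)

print(build_prompt("Can anyone explain why the experiment failed?", n_shots=0))  # zero-shot
print(build_prompt("Can anyone explain why the experiment failed?", n_shots=3))  # few-shot
```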
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' baseline performance in classifying instructional moves
Compares zero-shot, one-shot, and few-shot prompting methods for educational discourse
Assesses out-of-the-box capabilities and reliability constraints in authentic classroom scenarios (see the per-move evaluation sketch after this list)
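A minimal sketch of the per-move evaluation implied by these reliability concerns, using illustrative labels and toy predictions rather than the paper's data, to show how higher recall for one move can come with more false positives:

```python
# Illustrative only: per-move precision/recall against expert codes.
# Labels and toy predictions are assumptions, not the paper's data.
from sklearn.metrics import classification_report

expert = ["uptake", "evaluation", "uptake", "open_question", "evaluation", "uptake"]
model  = ["uptake", "uptake",     "uptake", "uptake",         "evaluation", "uptake"]

# Over-predicting "uptake" yields perfect recall for that move but lower precision.
print(classification_report(expert, model, zero_division=0))
```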
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated six LLMs on instructional move classification
Used zero-shot, one-shot, and few-shot prompting methods
Found that few-shot prompting significantly improved performance for state-of-the-art models (see the benchmarking sketch after this list)
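A hypothetical benchmarking skeleton for the model-by-prompting-condition grid described above; the model names and the LLM call are placeholder assumptions, and the prompt builder is passed in rather than fixed to any particular coding scheme.

```python
# Hypothetical benchmarking skeleton (not the authors' code): run each model
# under each prompting condition and collect agreement with expert codes.
from sklearn.metrics import cohen_kappa_score

MODELS = ["model_a", "model_b"]                    # stand-ins for the six LLMs
CONDITIONS = {"zero-shot": 0, "one-shot": 1, "few-shot": 3}

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call; should return a move label."""
    raise NotImplementedError("Replace with an actual client call.")

def run_benchmark(utterances, gold_labels, make_prompt):
    """Return Cohen's Kappa per (model, condition) cell of the grid."""
    results = {}
    for model in MODELS:
        for condition, shots in CONDITIONS.items():
            preds = [call_llm(model, make_prompt(u, shots)) for u in utterances]
            results[(model, condition)] = cohen_kappa_score(gold_labels, preds)
    return results
```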