Optimizing LLM Annotation of Classroom Discourse through Multi-Agent Orchestration

📅 2026-03-08
🤖 AI Summary
This work addresses the challenge of scaling large language model (LLM)–based annotation for high-stakes educational constructs that require contextual understanding, pedagogical judgment, or normative interpretation, where single-call LLM approaches struggle to balance scalability with validity. To bridge this gap, the authors propose a hierarchical, cost-aware multi-agent collaborative framework that models LLM annotation as a three-stage cognitive process: unverified initial labeling, self-verification, and disagreement-driven arbitration. This approach introduces, for the first time, integrated mechanisms for self-verification and independent arbitration. By leveraging prompt engineering, self-consistency checks, and rule-driven arbitration strategies, the method significantly enhances the accuracy and consistency of classroom discourse annotations while maintaining computationally tractable overhead, thereby effectively reconciling the tension between large-scale applicability and annotation validity.

📝 Abstract
Large language models (LLMs) are increasingly positioned as scalable tools for annotating educational data, including classroom discourse, interaction logs, and qualitative learning artifacts. Their ability to rapidly summarize instructional interactions and assign rubric-aligned labels has fueled optimism about reducing the cost and time associated with expert human annotation. However, growing evidence suggests that single-pass LLM outputs remain unreliable for high-stakes educational constructs that require contextual, pedagogical, or normative judgment, such as instructional intent or discourse moves. This tension between scale and validity sits at the core of contemporary education data science. In this work, we present and empirically evaluate a hierarchical, cost-aware orchestration framework for LLM-based annotation that improves reliability while explicitly modeling computational tradeoffs. Rather than treating annotation as a one-shot prediction problem, we conceptualize it as a multi-stage epistemic process comprising (1) an unverified single-pass annotation stage, in which models independently assign labels based on the rubric; (2) a self-verification stage, in which each model audits its own output against rubric definitions and revises its label if inconsistencies are detected; and (3) a disagreement-centric adjudication stage, in which an independent adjudicator model examines the verified labels and justifications and determines a final label in accordance with the rubric. This structure mirrors established human annotation workflows in educational research, where initial coding is followed by self-checking and expert resolution of disagreements.
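The three-stage workflow described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the `run_pipeline` function, the role-tagged `llm` callable, and the prompt wording are all assumptions introduced here for clarity.

```python
from collections import Counter
from typing import Callable, Dict, List

def run_pipeline(utterance: str, rubric: str, models: List[str],
                 llm: Callable[[str, str], str]) -> str:
    """Hypothetical three-stage annotation pipeline: initial labeling,
    self-verification, and disagreement-driven adjudication."""
    # Stage 1: unverified single-pass annotation -- each model labels independently.
    labels: Dict[str, str] = {
        m: llm(m, f"Label this utterance per the rubric.\n"
                  f"Rubric: {rubric}\nUtterance: {utterance}")
        for m in models
    }
    # Stage 2: self-verification -- each model audits its own label against
    # the rubric and may revise it.
    verified: Dict[str, str] = {
        m: llm(m, f"Check the label '{lab}' against the rubric and return the "
                  f"(possibly revised) label.\n"
                  f"Rubric: {rubric}\nUtterance: {utterance}")
        for m, lab in labels.items()
    }
    # Stage 3: disagreement-centric adjudication -- the adjudicator model is
    # invoked only when verified labels conflict, keeping overhead cost-aware.
    if len(set(verified.values())) == 1:
        return next(iter(verified.values()))
    majority_label, _ = Counter(verified.values()).most_common(1)[0]
    return llm("adjudicator",
               f"Annotators disagreed: {verified}. Majority label: {majority_label}. "
               f"Choose the final rubric-aligned label.\n"
               f"Rubric: {rubric}\nUtterance: {utterance}")
```

Because the adjudicator runs only on disagreement, the expected number of LLM calls per item stays close to two per model, which is where the cost-awareness of the framework comes from.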
Problem

Research questions and friction points this paper is trying to address.

LLM annotation
classroom discourse
educational data science
annotation reliability
high-stakes constructs
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent orchestration
LLM annotation
self-verification
disagreement adjudication
classroom discourse
Bakhtawar Ahtisham
Cornell University, USA
Kirk Vanacore
Cornell University, USA
Rene F. Kizilcec
Associate Professor, Cornell University
Education · Artificial Intelligence · Teaching and Learning · HCI