🤖 AI Summary
This work addresses the challenge of scaling large language model (LLM)–based annotation for high-stakes educational constructs that demand contextual understanding, pedagogical judgment, or normative interpretation, settings in which single-call LLM approaches struggle to balance scalability with validity. To bridge this gap, the authors propose a hierarchical, cost-aware multi-agent collaborative framework that models LLM annotation as a three-stage cognitive process: unverified initial labeling, self-verification, and disagreement-driven arbitration, integrating self-verification and independent arbitration mechanisms for the first time. By combining prompt engineering, self-consistency checks, and rule-driven arbitration strategies, the method improves the accuracy and consistency of classroom discourse annotations while keeping computational overhead tractable, thereby reconciling large-scale applicability with annotation validity.
📝 Abstract
Large language models (LLMs) are increasingly positioned as scalable tools for annotating educational data, including classroom discourse, interaction logs, and qualitative learning artifacts. Their ability to rapidly summarize instructional interactions and assign rubric-aligned labels has fueled optimism about reducing the cost and time associated with expert human annotation. However, growing evidence suggests that single-pass LLM outputs remain unreliable for high-stakes educational constructs that require contextual, pedagogical, or normative judgment, such as instructional intent or discourse moves. This tension between scale and validity sits at the core of contemporary education data science. In this work, we present and empirically evaluate a hierarchical, cost-aware orchestration framework for LLM-based annotation that improves reliability while explicitly modeling computational tradeoffs. Rather than treating annotation as a one-shot prediction problem, we conceptualize it as a multi-stage epistemic process comprising (1) an unverified single-pass annotation stage, in which models independently assign labels based on the rubric; (2) a self-verification stage, in which each model audits its own output against rubric definitions and revises its label if inconsistencies are detected; and (3) a disagreement-centric adjudication stage, in which an independent adjudicator model examines the verified labels and justifications and determines a final label in accordance with the rubric. This structure mirrors established human annotation workflows in educational research, where initial coding is followed by self-checking and expert resolution of disagreements.
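The three-stage workflow described in the abstract can be sketched as a small orchestration function. This is a hypothetical illustration, not the authors' implementation: the `Annotator` callables stand in for actual LLM API calls, and the prompt format and function names are assumptions. The cost-aware aspect appears in stage 3, where the adjudicator model is invoked only on disagreement.

```python
from typing import Callable, List

Label = str
Annotator = Callable[[str], Label]  # wraps one LLM call (illustrative stub)

def annotate(utterance: str,
             annotators: List[Annotator],
             verifiers: List[Annotator],
             adjudicate: Callable[[str, List[Label]], Label]) -> Label:
    """Three-stage pipeline: initial labeling -> self-verification ->
    disagreement-centric adjudication (adjudicator called only if needed)."""
    # Stage 1: each model independently assigns a rubric-based label
    # (the unverified single-pass annotation stage).
    initial = [a(utterance) for a in annotators]

    # Stage 2: each model audits its own output against the rubric and
    # may revise its label (the self-verification stage).
    verified = [v(f"{utterance} || proposed: {lab}")
                for v, lab in zip(verifiers, initial)]

    # Stage 3 (cost-aware): unanimous verified labels are accepted as-is;
    # the independent adjudicator model runs only when labels disagree.
    if len(set(verified)) == 1:
        return verified[0]
    return adjudicate(utterance, verified)
```

Because most utterances yield unanimous labels after self-verification, the expensive adjudication call is paid only for the contested minority, which is what keeps the overhead computationally tractable.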