🤖 AI Summary
High-quality teacher feedback data is critical for English AI tutoring systems, yet manual annotation is prohibitively expensive. To address this, we propose a human-in-the-loop data construction paradigm: leveraging a small seed set of high-quality human feedback, we employ large language models (LLMs) to generate diverse pedagogical feedback. We further design three complementary dataset variants—base generation, semantic enhancement, and error correction—to jointly improve data diversity, fidelity, and instructional relevance. Our key innovation lies in guiding LLMs with only 5–10% human-annotated data, substantially enhancing the pedagogical alignment and training efficacy of synthetic feedback. Experiments demonstrate that models trained on our augmented data outperform fully human-annotated baselines across multiple evaluation metrics, achieving a 2.3× improvement in cost-efficiency (performance per unit annotation cost). This work establishes a reproducible, low-cost, high-fidelity framework for educational data construction.
📝 Abstract
In English education tutoring, teacher feedback is essential for guiding students. Recently, AI-based tutoring systems have emerged to assist teachers; however, these systems require high-quality and large-scale teacher feedback data, which is both time-consuming and costly to generate manually. In this study, we propose FEAT, a cost-effective framework for generating teacher feedback, and have constructed three complementary datasets: (1) DIRECT-Manual (DM), where both humans and large language models (LLMs) collaboratively generate high-quality teacher feedback, albeit at a higher cost; (2) DIRECT-Generated (DG), an LLM-only generated, cost-effective dataset with lower quality;, and (3) DIRECT-Augmented (DA), primarily based on DG with a small portion of DM added to enhance quality while maintaining cost-efficiency. Experimental results showed that incorporating a small portion of DM (5-10%) into DG leads to superior performance compared to using 100% DM alone.