🤖 AI Summary
This study addresses the inefficiency of manually producing personalised, standards-aligned feedback, a task that is highly time-consuming for science teachers. We propose an educator-in-the-loop, modular large language model (LLM) system. Our approach features: (1) an error-aware assessment module that pinpoints conceptual misconceptions in student responses; and (2) a curriculum-aligned generation mechanism built on topic-aware memory chains, which replaces conventional similarity-based retrieval to improve semantic consistency between feedback and curricular standards while suppressing noise. The system integrates structured curriculum knowledge, a lightweight error-detection model, and an interactive supervision interface, enabling teachers to customise feedback policies and intervene in real time. Experiments show that our method reduces teachers' feedback-preparation time by 62% while preserving feedback quality and interpretability, and also increases students' feedback adoption rates and learning outcomes, demonstrating scalability and pedagogical applicability.
📝 Abstract
Effective feedback is essential for student learning but is time-intensive for teachers. We present LearnLens, a modular, LLM-based system that generates personalised, curriculum-aligned feedback in science education. LearnLens comprises three components: (1) an error-aware assessment module that captures nuanced reasoning errors; (2) a curriculum-grounded generation module that uses a structured, topic-linked memory chain rather than traditional similarity-based retrieval, improving relevance and reducing noise; and (3) an educator-in-the-loop interface for customisation and oversight. LearnLens addresses key challenges in existing systems, offering scalable, high-quality feedback that empowers both teachers and students.
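To make the contrast between the memory-chain design and similarity-based retrieval concrete, here is a minimal sketch. All names (`CurriculumNode`, `chain_retrieve`, the sample topics) are hypothetical illustrations, not the authors' implementation: the idea is that curriculum items carry explicit topic links, and retrieval follows those links from the topic of the detected misconception instead of ranking every item by embedding similarity.

```python
# Hypothetical sketch (not the LearnLens code): topic-linked "memory chain"
# lookup over curriculum items, as opposed to similarity-based retrieval.
from dataclasses import dataclass, field

@dataclass
class CurriculumNode:
    topic: str
    standard: str                                     # curriculum standard text
    next_topics: list = field(default_factory=list)   # explicit topic links

def chain_retrieve(nodes, start_topic, depth=2):
    """Follow explicit topic links instead of scoring all nodes by similarity."""
    by_topic = {n.topic: n for n in nodes}
    seen, frontier, out = set(), [start_topic], []
    for _ in range(depth):
        next_frontier = []
        for topic in frontier:
            node = by_topic.get(topic)
            if node and topic not in seen:
                seen.add(topic)
                out.append(node)
                next_frontier.extend(node.next_topics)
        frontier = next_frontier
    return out

nodes = [
    CurriculumNode("photosynthesis", "Plants convert light energy...", ["respiration"]),
    CurriculumNode("respiration", "Cells release energy from glucose...", []),
    CurriculumNode("magnetism", "Magnets exert forces at a distance...", []),
]
hits = chain_retrieve(nodes, "photosynthesis")
# Only topic-linked nodes are returned; the unrelated "magnetism" node is excluded.
```

Because only linked nodes are traversed, off-topic curriculum items never enter the feedback context, which is one plausible reading of how the memory chain "reduces noise" relative to top-k similarity retrieval.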