🤖 AI Summary
K–12 STEM teachers face high cognitive load and limited engagement when interpreting automatically generated assessment reports (AutoRs). Method: Guided by Cognitive Load Theory, this study reviews the design and implementation of AutoRs for formative use and finds that many existing reports fail to fully exploit diverse presentation formats (text, visual aids, plots) and advanced functionalities (live, interactive features), resulting in high initial cognitive demands. Contribution: The paper proposes a conceptual framework to inform the design, implementation, and evaluation of AutoRs, balancing the trade-offs between usability and functionality, with the aim of engaging teachers with technology-enhanced assessment results, supporting data-driven decision-making, and enabling personalized feedback.
📝 Abstract
This paper provides a comprehensive review of the design and implementation of automatically generated assessment reports (AutoRs) for formative use in K–12 Science, Technology, Engineering, and Mathematics (STEM) classrooms. With the increasing adoption of technology-enhanced assessments, there is a critical need for interactive human–computer tools that efficiently support teachers in interpreting and applying assessment data. AutoRs are designed to provide synthesized, interpretable, and actionable insights into students' performance, learning progress, and areas for improvement. Guided by cognitive load theory, this study emphasizes the importance of reducing teachers' cognitive demands through user-centered, intuitive designs. It highlights the potential of diverse information presentation formats (such as text, visual aids, and plots) and advanced functionalities (such as live and interactive features) to enhance usability. However, the findings also reveal that many existing AutoRs fail to fully utilize these approaches, leading to high initial cognitive demands and limited engagement. This paper proposes a conceptual framework to inform the design, implementation, and evaluation of AutoRs, balancing the trade-offs between usability and functionality. The framework aims to address challenges in engaging teachers with technology-enhanced assessment results, facilitating data-driven decision-making, and providing personalized feedback to improve the teaching and learning process.