Human-Centered Design for AI-based Automatically Generated Assessment Reports: A Systematic Review

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
When using AI-generated automated assessment reports (AutoRs), K-12 STEM teachers experience high cognitive load, low acceptance, and feedback that is not actionable. Method: This study proposes the first teacher-centered conceptual framework for AutoRs, integrating Cognitive Load Theory, human factors engineering, and multimodal interaction design. Through a systematic review of existing practices, we identify two prevalent shortcomings: insufficient teacher involvement and poor interface usability. We then develop a prototype system supporting real-time interaction, explainable visualizations, and formative feedback. Contribution/Results: Our work establishes a "usability–functionality" balanced design paradigm; provides a reusable methodology for developing educational AI reporting tools; and empirically validates significant reductions in teacher cognitive load and notable improvements in feedback adoption rates.

📝 Abstract
This paper provides a comprehensive review of the design and implementation of automatically generated assessment reports (AutoRs) for formative use in K-12 Science, Technology, Engineering, and Mathematics (STEM) classrooms. With the increasing adoption of technology-enhanced assessments, there is a critical need for human-computer interactive tools that efficiently support teachers in interpreting and applying assessment data. AutoRs are designed to provide synthesized, interpretable, and actionable insights into students' performance, learning progress, and areas for improvement. Guided by cognitive load theory, this study emphasizes the importance of reducing teachers' cognitive demands through user-centered, intuitive designs. It highlights the potential of diverse information presentation formats (such as text, visual aids, and plots) and advanced functionalities (such as live and interactive features) to enhance usability. However, the findings also reveal that many existing AutoRs fail to fully utilize these approaches, leading to high initial cognitive demands and limited engagement. This paper proposes a conceptual framework to inform the design, implementation, and evaluation of AutoRs, balancing the trade-offs between usability and functionality. The framework aims to address challenges in engaging teachers with technology-enhanced assessment results, facilitating data-driven decision-making, and providing personalized feedback to improve the teaching and learning process.
Problem

Research questions and friction points this paper is trying to address.

AI-generated assessment reports
K-12 STEM education
Teacher understanding and acceptance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-Centric Design
AI-Generated Assessment Reports
Personalized Feedback
Ehsan Latif
University of Georgia
Multi-robot systems, Machine Learning, AIED
Ying Chen
Illinois Workforce Educational Research Collaborative, University of Illinois System, Chicago, 60606, IL, USA
Xiaoming Zhai
Associate Professor, University of Georgia
Science Education, AI, Assessment
Yue Yin
Department of Educational Psychology, University of Illinois Chicago, Chicago, 60607, IL, USA