AI Summary
This work addresses the challenge teachers face in effectively monitoring student interactions with generative AI while safeguarding privacy, particularly in identifying learning difficulties, alignment with educational objectives, and compliance risks. To this end, the authors propose a metacognition-inspired dashboard system that leverages generative AI to automatically produce structured conversational summaries as privacy-preserving substitutes for raw chat logs. These summaries reveal students' interaction trajectories, usage patterns, and potential issues without exposing sensitive content. Developed through co-design with teachers and students, the system integrates evidence-based visualizations, governance mechanisms, and class-level analytics. Preliminary evaluation demonstrates that the approach substantially reduces teachers' cognitive load, enhances trust in the generated summaries, and achieves high acceptability regarding privacy preservation, offering a viable pathway to balance pedagogical insight and data protection in AI-enhanced educational settings.
Abstract
Generative AI tools are increasingly used for coursework help, shifting much of students' help-seeking and reasoning into student-AI chats that are largely invisible to instructors. This loss of visibility can weaken instructors' ability to understand students' difficulties, ensure alignment with course goals, and uphold course policies. Yet transcript-level access is neither scalable nor ethically straightforward: reading raw chat logs across a class is impractical, and exposing detailed dialogue can raise privacy concerns and create chilling effects on help-seeking. As a result, instructors face a tension between needing actionable insight and avoiding default surveillance of student conversations. To address this gap, we propose a meta-reflective dashboard that makes student-AI sessions interpretable without exposing raw chat logs by default. After each help-seeking session, a reflection AI produces a structured, session-level summary of the student's interaction trajectory, AI usage patterns, and potential risks. We co-designed the dashboard with instructors and students to surface key challenges and design goals, and conducted a formative evaluation of perceived usefulness, trust in the summaries, and privacy acceptability. Findings suggest that the proposed dashboard can reduce instructors' sensemaking effort while mitigating privacy concerns associated with transcript-level access, and they also yield design implications for evidence, governance, and scalable class-level analytics for AI-supported learning.