Gaze to Insight: A Scalable AI Approach for Detecting Gaze Behaviours in Face-to-Face Collaborative Learning

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of existing gaze behaviour detection methods, which rely heavily on labour-intensive manual annotation and generalise poorly, limiting their scalability in real-world collaborative learning environments. To overcome these challenges, the work proposes an annotation-free, scalable AI framework that, for the first time, combines YOLOE-26 (with text-prompting capability) and the Gaze-LLE model for zero-shot, cross-setup-robust gaze target recognition. The framework further uses a pretrained YOLO11 model for person tracking and YOLOE-26 for detecting education-related objects. Evaluated on video data, the method achieves an F1-score of 0.829, performing best at identifying laptop-directed and peer-directed gaze, and outperforms conventional supervised approaches in complex scenarios with greater stability and generalisation.
📝 Abstract
Previous studies have illustrated the potential of analysing gaze behaviours in collaborative learning to provide educationally meaningful information for students to reflect on their learning. Over the past decades, machine learning approaches have been developed to automatically detect gaze behaviours from video data. Yet, since these approaches often require large amounts of labelled data for training, human annotation remains necessary. Additionally, researchers have questioned the cross-configuration robustness of machine learning models developed, as training datasets often fail to encompass the full range of situations encountered in educational contexts. To address these challenges, this study proposes a scalable artificial intelligence approach that leverages pretrained and foundation models to automatically detect gaze behaviours in face-to-face collaborative learning contexts without requiring human-annotated data. The approach utilises pretrained YOLO11 for person tracking, YOLOE-26 with text-prompt capability for education-related object detection, and the Gaze-LLE model for gaze target prediction. The results indicate that the proposed approach achieves an F1-score of 0.829 in detecting students' gaze behaviours from video data, with strong performance for laptop-directed gaze and peer-directed gaze, yet weaker performance for other gaze targets. Furthermore, when compared to other supervised machine learning approaches, the proposed method demonstrates superior and more stable performance in complex contexts, highlighting its better cross-configuration robustness. The implications of this approach for supporting students' collaborative learning in real-world environments are also discussed.
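The pipeline described above chains three models: YOLO11 tracks persons, YOLOE-26 detects education-related objects from text prompts, and Gaze-LLE predicts a gaze target point. The final labelling step, mapping a predicted gaze point onto the detected boxes to name the gaze behaviour, can be sketched as follows. This is a minimal illustration, not the authors' exact rule: the box format, the smallest-box tie-break, and the "other" fallback are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "laptop", "person", "worksheet"
    box: tuple    # (x1, y1, x2, y2) in pixel coordinates

def classify_gaze_target(gaze_point, detections, gazer_box=None):
    """Assign a gaze-behaviour label by testing which detected box
    contains the predicted gaze point. The gazer's own box is excluded
    so it is not mislabelled as peer-directed gaze."""
    gx, gy = gaze_point
    hits = []
    for det in detections:
        if gazer_box is not None and det.box == gazer_box:
            continue
        x1, y1, x2, y2 = det.box
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            hits.append(det)
    if not hits:
        return "other"  # assumed fallback when no box contains the point
    # Prefer the smallest containing box as the most specific target,
    # e.g. a laptop box nested inside a larger person box.
    best = min(hits, key=lambda d: (d.box[2] - d.box[0]) * (d.box[3] - d.box[1]))
    return "peer" if best.label == "person" else best.label

# Hypothetical frame: a laptop box nested inside a peer's person box.
dets = [Detection("laptop", (100, 200, 300, 400)),
        Detection("person", (50, 50, 500, 500))]
print(classify_gaze_target((150, 250), dets))  # smallest containing box wins: laptop
print(classify_gaze_target((400, 100), dets))  # only the person box contains it: peer
print(classify_gaze_target((600, 600), dets))  # no containing box: other
```

In a real deployment the `detections` would come from the YOLOE-26/YOLO11 outputs per frame, and per-frame labels would typically be smoothed over a short window before being reported as a gaze behaviour.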
Problem

Research questions and friction points this paper is trying to address.

gaze behaviour
collaborative learning
cross-configuration robustness
human annotation
scalable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

gaze behaviour detection
foundation models
zero-shot learning
collaborative learning
cross-configuration robustness