🤖 AI Summary
Existing approaches struggle to capture, in real time, the complex dynamics of learner collaboration in educational settings. This work proposes a novel paradigm that integrates lightweight wearable Internet-of-Things (IoT) devices with large language models (LLMs) to address this challenge. By leveraging multimodal sensing—encompassing audio, visual, motion, and depth data—the system continuously collects collaborative behavior streams, which are then transformed into structured features and fed into an LLM grounded in established learning theories. The LLM generates high-level, theory-consistent narrative analyses of collaborative interactions. To the best of our knowledge, this is the first approach to combine wearable IoT and LLMs for educational collaboration analysis, significantly lowering deployment barriers while effectively capturing rich traces of collaborative activity. Empirical validation demonstrates the system's feasibility and analytical value in authentic educational environments.
📝 Abstract
We present BadgeX, a novel system integrating lightweight wearable IoT devices (smart badges/smartphones) with Large Language Models (LLMs) to enable real-time collaborative learning analytics. The system captures multimodal sensor data (e.g., audio, image, motion, depth) from learners, processes it into structured features, and employs an LLM-driven framework to interpret these features, generating high-level insights grounded in learning theory. A pilot study demonstrated the system's ability to capture rich collaboration traces and the LLM's ability to produce plausible, theoretically coherent narrative analyses from sensor-derived features. BadgeX aims to lower deployment barriers, making complex collaborative dynamics visible and offering a pathway for real-time support in educational settings.
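The sensing-to-interpretation pipeline described above can be sketched in a few lines. The schema, feature names, and prompt wording below are illustrative assumptions, not the paper's actual implementation: the idea is only that per-learner sensor windows are aggregated into structured group features, which are then framed in learning-theory terms for the LLM.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorWindow:
    """One time window of per-learner, sensor-derived measurements (hypothetical schema)."""
    learner_id: str
    speaking_seconds: float   # derived from audio
    movement_energy: float    # derived from motion/IMU data
    proximity_peers: int      # peers nearby, from depth/vision sensing

def extract_features(windows: List[SensorWindow]) -> dict:
    """Aggregate raw per-learner windows into structured group-level features."""
    total_talk = sum(w.speaking_seconds for w in windows) or 1.0
    return {
        # Share of total speaking time per learner (a participation-equity proxy)
        "talk_share": {w.learner_id: round(w.speaking_seconds / total_talk, 2)
                       for w in windows},
        "avg_movement": round(sum(w.movement_energy for w in windows) / len(windows), 2),
        "co_located": sum(1 for w in windows if w.proximity_peers > 0),
    }

def build_prompt(features: dict) -> str:
    """Frame the structured features for an LLM, grounded in learning-theory constructs."""
    return (
        "You are an expert in collaborative learning analytics. "
        "Using constructs such as participation equity and joint attention, "
        f"interpret these group features: {features}. "
        "Produce a short, theory-consistent narrative analysis."
    )

windows = [
    SensorWindow("A", speaking_seconds=40.0, movement_energy=0.3, proximity_peers=2),
    SensorWindow("B", speaking_seconds=10.0, movement_energy=0.5, proximity_peers=2),
]
prompt = build_prompt(extract_features(windows))
```

In practice the resulting prompt would be sent to an LLM API, and the structured-feature step is what keeps raw sensor streams (and their volume and privacy concerns) out of the model's context.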