🤖 AI Summary
This study proposes a vision-language model (VLM) approach grounded in the ICAP framework to automatically analyze screen-based behaviors in collaborative learning settings, capturing underlying cognitive and collaborative processes as an alternative to time-consuming manual coding. It presents the first systematic comparison between single-agent and multi-agent VLMs and introduces two interpretable, scalable multi-agent architectures: a workflow-based three-agent system and an autonomous decision-making system inspired by ReAct. These frameworks integrate scene segmentation, cursor-aware prompting, and a reasoning-action-correction loop. Experimental results demonstrate that the multi-agent systems consistently outperform their single-agent counterparts, with the workflow-based system achieving superior performance in scene detection and the autonomous decision-making system excelling in action recognition.
📝 Abstract
On-screen learning behavior provides valuable insights into how students seek, use, and create information during learning. Analyzing on-screen behavioral engagement is essential for capturing students' cognitive and collaborative processes. The recent development of Vision Language Models (VLMs) offers new opportunities to automate the labor-intensive manual coding often required for multimodal video data analysis. In this study, we compared the performance of leading closed-source VLMs (Claude-3.7-Sonnet, GPT-4.1) and an open-source VLM (Qwen2.5-VL-72B) in single- and multi-agent settings for automated coding of screen recordings in collaborative learning contexts, based on the ICAP framework. In particular, we proposed and compared two multi-agent frameworks: 1) a workflow-based three-agent multi-agent system (MAS) that segments screen videos by scene and detects on-screen behaviors using cursor-informed VLM prompting with evidence-based verification; 2) an autonomous-decision MAS inspired by ReAct that iteratively interleaves reasoning, tool-like operations (segmentation/classification/validation), and observation-driven self-correction to produce interpretable on-screen behavior labels. Experimental results demonstrated that both proposed MAS frameworks achieved viable performance, outperforming single VLMs on scene and action detection tasks. Notably, the workflow-based MAS performed best on scene detection, while the autonomous-decision MAS performed best on action detection. This study demonstrates the effectiveness of VLM-based multi-agent systems for video analysis and contributes a scalable framework for multimodal data analytics.
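The ReAct-style loop described above (reasoning, tool-like operations, observation-driven self-correction) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the tool names (`segment`, `classify`, `validate`) and the ICAP label set come from the abstract, while the stub tool bodies, the `Step` trace record, and the control flow are assumptions for illustration; a real system would back each tool with VLM calls.

```python
from dataclasses import dataclass

# ICAP engagement labels (Interactive, Constructive, Active, Passive).
ICAP_LABELS = ("Interactive", "Constructive", "Active", "Passive")


@dataclass
class Step:
    """One reasoning-action-observation record in the agent's trace."""
    thought: str
    action: str
    observation: str


def segment(video: str) -> list[str]:
    # Stub: split a recording into scenes (a real agent would call a VLM).
    return [f"{video}:scene{i}" for i in range(2)]


def classify(scene: str) -> str:
    # Stub: assign an ICAP label to a scene (hypothetical heuristic).
    return "Active" if scene.endswith("0") else "Constructive"


def validate(label: str) -> bool:
    # Stub: accept only labels from the ICAP coding scheme.
    return label in ICAP_LABELS


def react_label(video: str, max_iters: int = 5):
    """Interleave reasoning, tool calls, and observation-driven checks."""
    trace: list[Step] = []
    labels: dict[str, str] = {}

    scenes = segment(video)
    trace.append(Step("Segment the recording first.",
                      f"segment({video!r})", f"{len(scenes)} scenes"))

    for scene in scenes:
        for _ in range(max_iters):
            label = classify(scene)
            trace.append(Step(f"Classify {scene}.",
                              f"classify({scene!r})", label))
            if validate(label):  # observation-driven acceptance check
                labels[scene] = label
                break
            # Self-correction branch: a real agent would revise its
            # prompt or choose a different tool before retrying.
    return labels, trace
```

The trace of `Step` records is what makes the labels interpretable: each final label can be traced back through the thoughts, tool calls, and observations that produced it.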