Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of predicting learner interaction behaviors—such as pausing or skipping—at scale and in an interpretable way, before a video is released, in order to assess cognitive load and instructional design quality. The authors propose the first approach that integrates multimodal large language models (MLLMs) with Testing with Concept Activation Vectors (TCAV), extracting video embeddings and training neural classifiers to predict fine-grained, population-level interaction peaks across disciplines. Evaluated on 66 courses encompassing 77 million interaction events, the model accurately forecasts interaction spikes, generalizes to unseen subjects, and encodes interpretable instructional features aligned with multimedia learning theory, thereby enabling proactive evaluation of pedagogical design.
📝 Abstract
Learners' use of video controls in educational videos provides implicit signals of cognitive processing and instructional design quality, yet the lack of scalable and explainable predictive models limits instructors' ability to anticipate such behavior before deployment. We propose a scalable, interpretable pipeline for predicting population-level watching, pausing, skipping, and rewinding behavior as proxies for cognitive load from video content alone. Our approach leverages multimodal large language models (MLLMs) to compute embeddings of short video segments and trains a neural classifier to identify temporally fine-grained interaction peaks. Drawing from multimedia learning theory on instructional design for optimal cognitive load, we code features of the video segments using GPT-5 and employ them as a basis for interpreting model predictions via concept activation vectors. We evaluate our pipeline on 77 million video control events from 66 online courses. Our findings demonstrate that classifiers based on MLLM embeddings reliably predict interaction peaks, generalize to unseen academic fields, and encode interpretable, theory-relevant instructional concepts. Overall, our results show the feasibility of cost-efficient, interpretable pre-screening of educational video design and open new opportunities to empirically examine multimedia learning theory at scale.
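The interpretation step described in the abstract follows the TCAV idea: learn a direction in embedding space that separates segments exhibiting a theory-coded concept from random segments, then measure how often the classifier's "interaction peak" prediction is sensitive to that direction. The following is a minimal, self-contained sketch of that logic; the embeddings, the mean-difference CAV, and the toy gradient function are illustrative stand-ins, not the paper's actual pipeline (which uses MLLM embeddings and a trained neural classifier).

```python
import random

random.seed(0)
DIM = 8  # toy embedding dimension (the real MLLM embeddings are much larger)

def rand_vec(shift=0.0):
    """Synthetic embedding: Gaussian noise, optionally shifted."""
    return [random.gauss(shift, 1.0) for _ in range(DIM)]

# Hypothetical data: embeddings of segments coded as showing a concept
# (e.g., "on-screen text density") vs. random segments.
concept_embs = [rand_vec(shift=1.0) for _ in range(50)]
random_embs  = [rand_vec(shift=0.0) for _ in range(50)]

def mean(vectors):
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(DIM)]

# Simple stand-in for a learned CAV: the difference of class means.
# TCAV proper trains a linear classifier and uses its weight vector.
mu_c, mu_r = mean(concept_embs), mean(random_embs)
cav = [c - r for c, r in zip(mu_c, mu_r)]

def grad_of_peak_logit(emb):
    # Placeholder for the gradient of the classifier's "interaction peak"
    # logit w.r.t. the embedding; in practice this comes from backprop
    # through the trained neural classifier. Here: a toy linear model.
    return emb

test_embs = [rand_vec(shift=0.5) for _ in range(100)]
# TCAV score: fraction of inputs whose peak logit increases along the CAV.
score = sum(
    1 for e in test_embs
    if sum(g * c for g, c in zip(grad_of_peak_logit(e), cav)) > 0
) / len(test_embs)
print(f"TCAV score for the concept: {score:.2f}")
```

A score well above 0.5 suggests the concept direction positively influences the model's peak predictions; comparing scores across theory-derived concepts is what lets the authors relate predictions back to multimedia learning theory.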
Problem

Research questions and friction points this paper is trying to address.

learner-video interaction
cognitive load
educational video
predictive modeling
instructional design
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal large language models
learner-video interaction
cognitive load prediction
interpretable AI
educational video design
Dominik Glandorf
EPFL, Switzerland
Fares Fawzi
EPFL, Switzerland
Tanja Käser
Tenure Track Assistant Professor
Educational Data Mining · AI for Education · Learning Analytics