🤖 AI Summary
Existing visual parsing datasets suffer from coarse annotation granularity, insufficient coverage of educational domains, and a lack of procedural logical modeling, all of which hinder educational video understanding. To address these limitations, we introduce PhysLab, the first multi-granularity video benchmark tailored for secondary-school physics experiment instruction, comprising 620 authentic, long-form classroom videos spanning four representative experiments that feature diverse scientific instruments. We propose a hierarchical, fine-grained annotation schema that integrates temporal action localization, human-object interaction (HOI) graphs, instrument-level semantic segmentation, and step-wise experimental procedure flowcharts, bridging critical gaps in structured task modeling and scientific-instrument interaction representation for procedural educational videos. Leveraging PhysLab, we establish strong baseline models and systematically identify key challenges in educational video parsing. We publicly release both the dataset and an evaluation toolkit to advance interdisciplinary research at the intersection of visual understanding and intelligent education.
📝 Abstract
Visual parsing of images and videos is critical for a wide range of real-world applications. However, progress in this field is constrained by limitations of existing datasets: (1) insufficient annotation granularity, which impedes fine-grained scene understanding and high-level reasoning; (2) limited domain coverage, particularly a lack of datasets tailored to educational scenarios; and (3) a lack of explicit procedural guidance, with minimal logical rules and insufficient representation of structured task processes. To address these gaps, we introduce PhysLab, the first video dataset that captures students conducting complex physics experiments. The dataset includes four representative experiments that feature diverse scientific instruments and rich human-object interaction (HOI) patterns. PhysLab comprises 620 long-form videos and provides multilevel annotations that support a variety of vision tasks, including action recognition, object detection, and HOI analysis, among others. We establish strong baselines and perform extensive evaluations to highlight key challenges in parsing procedural educational videos. We expect PhysLab to serve as a valuable resource for advancing fine-grained visual parsing, facilitating intelligent classroom systems, and fostering closer integration between computer vision and educational technologies. The dataset and evaluation toolkit are publicly available at https://github.com/ZMH-SDUST/PhysLab.