🤖 AI Summary
This work addresses the lack of systematic evaluation of multimodal large language models (MLLMs) on long-duration, knowledge-intensive instructional videos with explicit temporal structure. To this end, we introduce LEMON, the first comprehensive benchmark tailored to STEM lecture videos, featuring six core tasks and twelve subtasks and emphasizing semantic richness, tight cross-modal coupling, explicit temporal and pedagogical structure, and multi-turn contextual dependencies. Built on 2,277 real-world lecture video segments and 4,181 high-quality question-answer pairs, LEMON supports both multiple-choice and open-ended questions, enabling assessment of cross-modal alignment and temporal reasoning. Experiments reveal that even state-of-the-art models such as GPT-4o show substantial performance gaps in temporal reasoning and instructional-content prediction, establishing LEMON as a challenging benchmark for future research.
📝 Abstract
Recent multimodal large language models (MLLMs) have shown remarkable progress across vision, audio, and language tasks, yet their performance on long-form, knowledge-intensive, and temporally structured educational content remains largely unexplored. To bridge this gap, we introduce LEMON, a Lecture-based Evaluation benchmark for MultimOdal uNderstanding, focused on STEM lecture videos that require long-horizon reasoning and cross-modal integration. LEMON comprises 2,277 video segments spanning 5 disciplines and 29 courses, with an average duration of 196.1 seconds, yielding 4,181 high-quality QA pairs: 3,413 multiple-choice and 768 open-ended questions. Distinct from existing video benchmarks, LEMON features: (1) semantic richness and disciplinary density, (2) tightly coupled video-audio-text modalities, (3) explicit temporal and pedagogical structure, and (4) contextually linked multi-turn questioning. It further encompasses six major tasks and twelve subtasks, covering the full cognitive spectrum from perception through reasoning to generation. Comprehensive experiments reveal substantial performance gaps across tasks, showing that even state-of-the-art MLLMs such as GPT-4o struggle with temporal reasoning and instructional prediction. We expect LEMON to serve as an extensible and challenging benchmark for advancing multimodal perception, reasoning, and generation in long-form instructional content.
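To make the data layout concrete, the sketch below shows one plausible way a LEMON sample could be represented and scored. All field and function names (`LectureSegment`, `QAItem`, `multiple_choice_accuracy`, etc.) are illustrative assumptions, not the benchmark's published schema; it is a minimal sketch, assuming exact-match scoring for the multiple-choice portion.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Hypothetical representation of a LEMON sample; names are assumptions
# for illustration, not the benchmark's actual schema.

@dataclass
class QAItem:
    question: str
    question_type: str                    # "multiple_choice" or "open_ended"
    choices: Optional[List[str]] = None   # present only for multiple-choice
    answer: str = ""
    turn_index: int = 0                   # position in a multi-turn sequence

@dataclass
class LectureSegment:
    segment_id: str
    discipline: str                       # one of the 5 STEM disciplines
    course: str                           # one of the 29 courses
    duration_s: float                     # LEMON average is 196.1 seconds
    video_path: str
    transcript: str                       # audio/text modality, coupled to video
    qa_items: List[QAItem] = field(default_factory=list)

def multiple_choice_accuracy(
    segments: List[LectureSegment],
    predict: Callable[[LectureSegment, QAItem], str],
) -> float:
    """Exact-match accuracy over the multiple-choice portion.

    `predict` is any callable mapping (segment, qa) to a chosen answer string,
    e.g. a wrapper around an MLLM under evaluation.
    """
    correct = total = 0
    for seg in segments:
        for qa in seg.qa_items:
            if qa.question_type != "multiple_choice":
                continue
            total += 1
            if predict(seg, qa) == qa.answer:
                correct += 1
    return correct / total if total else 0.0
```

Under this sketch, the 4,181 QA pairs split into 3,413 multiple-choice items, which can be scored by exact match as above, and 768 open-ended items, which would require a separate reference-based or judged metric.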