🤖 AI Summary
Existing models struggle to perform joint cross-modal and cross-temporal understanding and reasoning over vision, audio, and text in long, complex real-world videos, and no systematic benchmark exists to evaluate this ability. To bridge this gap, we propose the first comprehensive evaluation framework for omni-modal understanding and reasoning in long videos, covering 13 core capabilities, 9,038 videos, and 15,000 high-quality human-annotated questions. Through multi-round annotation, multimodal alignment, and carefully designed cross-modal tasks, we systematically evaluate over 20 state-of-the-art multimodal large language models. Our results show that even the strongest closed-source model achieves only 64.2% accuracy, while the best open-source model reaches 46.8%, exposing significant performance bottlenecks and systematic failure modes in current approaches to omni-modal long-video understanding.
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance in visual and audio understanding when each modality is evaluated in isolation. However, their ability to jointly reason over omni-modal (visual, audio, and textual) signals in long, complex videos remains largely unexplored. We introduce MMOU, a new benchmark designed to systematically evaluate omni-modal understanding and reasoning under these challenging, real-world conditions. MMOU consists of 15,000 carefully curated questions paired with 9,038 web-collected videos of varying lengths, spanning diverse domains and exhibiting rich, tightly coupled audio-visual content. The benchmark covers 13 fundamental skill categories, all of which require integrating evidence across modalities and time. All questions are manually annotated over multiple rounds by professional annotators, ensuring high quality and reasoning fidelity. We evaluate 20+ state-of-the-art open-source and proprietary multimodal models on MMOU. The results expose substantial performance gaps: the best closed-source model achieves only 64.2% accuracy, while the strongest open-source model reaches just 46.8%. These findings highlight the difficulty of long-form omni-modal understanding, revealing that current models frequently fail to apply even fundamental skills to long videos. Through detailed analysis, we further identify systematic failure modes and provide insights into where and why current models break down.