MMOU: A Massive Multi-Task Omni Understanding and Reasoning Benchmark for Long and Complex Real-World Videos

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a twofold gap: existing models struggle to perform effective cross-modal and cross-temporal joint understanding and reasoning over vision, audio, and text in long, complex real-world videos, and no systematic benchmark exists for evaluating this ability. To bridge this gap, we propose the first comprehensive evaluation framework for full-modal understanding and reasoning in long videos, encompassing 13 core capabilities, 9,038 videos, and 15,000 high-quality human-annotated questions. Through multi-round annotation, multimodal alignment, and carefully designed cross-modal tasks, we systematically evaluate over 20 state-of-the-art multimodal large language models. Even the strongest closed-source model achieves only 64.2% accuracy, while the best open-source model reaches 46.8%, exposing significant performance bottlenecks and systematic failure modes in current approaches to full-modal long-video understanding.

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance in visual and audio understanding when evaluated in isolation. However, their ability to jointly reason over omni-modal (visual, audio, and textual) signals in long and complex videos remains largely unexplored. We introduce MMOU, a new benchmark designed to systematically evaluate multimodal understanding and reasoning under these challenging, real-world conditions. MMOU consists of 15,000 carefully curated questions paired with 9,038 web-collected videos of varying length, spanning diverse domains and exhibiting rich, tightly coupled audio-visual content. The benchmark covers 13 fundamental skill categories, all of which require integrating evidence across modalities and time. All questions are manually annotated over multiple rounds by professional annotators, ensuring high quality and reasoning fidelity. We evaluate more than 20 state-of-the-art open-source and proprietary multimodal models on MMOU. The results expose substantial performance gaps: the best closed-source model achieves only 64.2% accuracy, while the strongest open-source model reaches just 46.8%. These results highlight the challenges of long-form omni-modal understanding, revealing that current models frequently fail to apply even fundamental skills in long videos. Through detailed analysis, we further identify systematic failure modes and provide insights into where and why current models break.
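
MMOU scores models by accuracy over human-annotated questions, reported overall and per skill category. Assuming a multiple-choice format (common for such benchmarks, though not stated explicitly in this summary), below is a minimal sketch of that aggregation; the JSON fields (video, question, options, category, answer) and the model_answer callable are illustrative assumptions, not MMOU's documented format.

    import json
    from collections import defaultdict

    def evaluate(questions_path, model_answer):
        """Aggregate multiple-choice accuracy overall and per category.

        model_answer(video, question, options) is a user-supplied callable
        that returns one option key, e.g. "A" (hypothetical interface).
        """
        with open(questions_path) as f:
            questions = json.load(f)  # assumed: a list of question dicts

        correct, total = defaultdict(int), defaultdict(int)
        for q in questions:
            pred = model_answer(q["video"], q["question"], q["options"])
            total[q["category"]] += 1  # one of the 13 skill categories
            correct[q["category"]] += int(pred == q["answer"])

        per_category = {c: correct[c] / total[c] for c in total}
        overall = sum(correct.values()) / sum(total.values())
        return overall, per_category

Headline numbers such as 64.2% would correspond to overall; the per-category breakdown is what surfaces the systematic failure modes discussed in the analysis.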
Problem

Research questions and friction points this paper is trying to address.

multimodal reasoning
long-form video understanding
omni-modal integration
audio-visual-text comprehension
multimodal benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

omni-modal reasoning
long-form video understanding
multimodal benchmark
cross-modal integration
multimodal large language models
Arushi Goel
Research Scientist, NVIDIA
Computer Vision, Machine Learning, Vision and Language
Sreyan Ghosh
Ph.D. in CS at University of Maryland, College Park
AI, Machine Learning, NLP, Speech Recognition
Vatsal Agarwal
PhD @ UMD CS
Computer Vision, Deep Learning
Nishit Anand
MS CS at University of Maryland, College Park
Machine Learning, Computer Vision, Natural Language Processing, Speech Recognition
Kaousheik Jayakumar
University of Maryland, College Park, USA
Lasha Koroshinadze
University of Maryland, College Park, USA
Yao Xu
NVIDIA, USA
Katie Lyons
NVIDIA, USA
James Case
NVIDIA, USA
Karan Sapra
Clemson University, NVIDIA
Deep Learning, High Performance Computing, Image Processing, Genomics, Coexpression Networks
Kevin J. Shih
Research Scientist, NVIDIA
Computer Vision, Machine Learning, Robotics, Image Processing
Siddharth Gururani
NVIDIA Research
Artificial Intelligence, Music Information Retrieval, Machine Learning, Deep Learning, Text to Speech
Abhinav Shrivastava
Associate Professor, University of Maryland, College Park
Computer Vision, Machine Learning, Robotics
Ramani Duraiswami
Computer Science and UMIACS, University of Maryland
Scientific Computing, Spatial Audio, Machine Learning, Computational Electromagnetics
Dinesh Manocha
Distinguished University Professor, University of Maryland at College Park
computer graphics, geometric modeling, motion planning, virtual reality, robotics
Andrew Tao
Nvidia
Computer Vision, Machine Learning
Bryan Catanzaro
NVIDIA
Parallel Computing, Machine Learning
Mohammad Shoeybi
Senior Director of Applied Research at NVIDIA
Large Language Models, NLP, Multi-Modal Models, Generative AI
Wei Ping
Distinguished Research Scientist, NVIDIA
machine learning, large language models, speech synthesis, reinforcement learning