OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding?

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the weak temporal awareness of video large language models (Video-LLMs) in real-time, online video understanding by introducing OVO-Bench, the first benchmark dedicated to this setting. Methodologically, it systematically defines and evaluates the temporal awareness of Video-LLMs; designs three categories of temporal reasoning tasks (backward tracing, real-time understanding, and forward active responding); and constructs a fine-grained dataset with precise timestamps (644 videos, 12 tasks, roughly 2,800 meta-annotations) through a hybrid of automatic generation and human refinement. Evaluation of nine state-of-the-art Video-LLMs shows online understanding performance well below human baselines, exposing fundamental limitations in dynamic temporal reasoning. Key contributions include: (1) the first comprehensive temporal-awareness evaluation framework for Video-LLMs; (2) the first fine-grained benchmark for online video understanding; and (3) a systematic diagnostic analysis of Video-LLMs' temporal reasoning capabilities.

📝 Abstract
Temporal Awareness, the ability to reason dynamically based on the timestamp when a question is raised, is the key distinction between offline and online video LLMs. Unlike offline models, which rely on complete videos for static, post hoc analysis, online models process video streams incrementally and dynamically adapt their responses based on the timestamp at which the question is posed. Despite its significance, temporal awareness has not been adequately evaluated in existing benchmarks. To fill this gap, we present OVO-Bench (Online-VideO-Benchmark), a novel video benchmark that emphasizes the importance of timestamps for advanced online video understanding capability benchmarking. OVO-Bench evaluates the ability of video LLMs to reason and respond to events occurring at specific timestamps under three distinct scenarios: (1) Backward tracing: trace back to past events to answer the question. (2) Real-time understanding: understand and respond to events as they unfold at the current timestamp. (3) Forward active responding: delay the response until sufficient future information becomes available to answer the question accurately. OVO-Bench comprises 12 tasks, featuring 644 unique videos and approximately 2,800 human-curated fine-grained meta-annotations with precise timestamps. We combine automated generation pipelines with human curation. With these high-quality samples, we further developed an evaluation pipeline to systematically query video LLMs along the video timeline. Evaluations of nine Video-LLMs reveal that, despite advancements on traditional benchmarks, current models struggle with online video understanding, showing a significant gap compared to human agents. We hope OVO-Bench will drive progress in video LLMs and inspire future research in online video reasoning. Our benchmark and code can be accessed at https://github.com/JoeLeelyf/OVO-Bench.
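
The evaluation pipeline described in the abstract ("systematically query video LLMs along the video timeline") can be pictured as a small driver loop over timestamped questions. The sketch below is an illustrative reconstruction, not the authors' released code: `video_frames_up_to` and `answer_at` are hypothetical stand-ins for a stream reader and a Video-LLM wrapper, and exact-match scoring is a simplification of the benchmark's actual per-task metrics.

```python
# Minimal sketch of a timeline-query protocol, assuming hypothetical
# interfaces for the stream reader and the model under evaluation.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Query:
    question: str
    ask_time: float  # timestamp (seconds) at which the question is posed
    answer: str      # ground-truth answer, used here for scoring
    mode: str        # "backward" | "realtime" | "forward"

def evaluate_online(
    video_frames_up_to: Callable[[float], List],      # hypothetical: frames visible at time t
    answer_at: Callable[[List, str], Optional[str]],  # hypothetical model call; None = "abstain for now"
    queries: List[Query],
    step: float = 1.0,       # re-query interval for the forward scenario (seconds)
    horizon: float = 600.0,  # stop re-querying after this timestamp
) -> float:
    """Query the model along the video timeline and return exact-match accuracy."""
    correct = 0
    for q in queries:
        if q.mode in ("backward", "realtime"):
            # Backward tracing / real-time understanding: the model sees only
            # the stream up to ask_time and must answer immediately.
            pred = answer_at(video_frames_up_to(q.ask_time), q.question)
        else:
            # Forward active responding: keep re-querying as the stream advances;
            # the model should abstain (return None) until enough evidence arrives.
            pred, t = None, q.ask_time
            while pred is None and t <= horizon:
                pred = answer_at(video_frames_up_to(t), q.question)
                t += step
        correct += int(pred is not None and pred.strip() == q.answer.strip())
    return correct / len(queries)
```

The three branches mirror the paper's scenarios: backward tracing and real-time understanding are answered immediately from the visible prefix of the stream, while forward active responding is re-queried until the model stops abstaining or a cutoff is reached.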
Problem

Research questions and friction points this paper is trying to address.

Video Understanding
Real-time Content
Temporal Awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

OVO-Bench
Time-aware Video Understanding
Online Video Evaluation Framework