🤖 AI Summary
Existing benchmarks do not systematically evaluate large vision-language models (LVLMs) on long-term temporal reasoning for streaming video understanding; mainstream benchmarks focus on single-frame or single-instance question answering and neglect continuous spatiotemporal inference.
Method: We introduce SVBench—the first streaming video temporal multi-turn dialogue benchmark—comprising 1,353 long videos and 49,979 timestamped QA pairs organized into chains. A semi-automated annotation pipeline generates temporal QA chains over video segments and constructs temporal linkages between successive chains. We further develop StreamingChat, an open-source LVLM tailored to streaming video understanding, trained on the pipeline's annotations via joint fine-tuning of Qwen-VL and Video-LLaMA.
Contribution/Results: Evaluating 14 models in dialogue and streaming settings reveals critical bottlenecks in long-context temporal reasoning: the closed-source GPT-4o performs best, while most open-source LVLMs struggle. StreamingChat achieves state-of-the-art performance on SVBench among open-source LVLMs while remaining competitive on general vision-language benchmarks.
📝 Abstract
Despite the significant advancements of Large Vision-Language Models (LVLMs) on established benchmarks, a notable gap remains in evaluating their applicability to the emerging domain of long-context streaming video understanding. Current benchmarks for video understanding typically emphasize isolated single-instance text inputs and fail to assess a model's capacity to sustain temporal reasoning throughout the entire duration of a video stream. To address these limitations, we introduce SVBench, a pioneering benchmark with temporal multi-turn question-answering chains specifically designed to thoroughly assess the streaming video understanding capabilities of current LVLMs. We design a semi-automated annotation pipeline that yields 49,979 Question-Answer (QA) pairs for 1,353 streaming videos, generating QA chains that represent a series of consecutive multi-turn dialogues over video segments and constructing temporal linkages between successive QA chains. Our experimental results, obtained from 14 models in dialogue and streaming evaluations, reveal that while the closed-source GPT-4o outperforms the others, most open-source LVLMs struggle with long-context streaming video understanding. We also construct a StreamingChat model, which significantly outperforms open-source LVLMs on SVBench and achieves comparable performance on diverse vision-language benchmarks. We expect SVBench to advance research on streaming video understanding by providing a comprehensive and in-depth analysis of current LVLMs. Our benchmark and model can be accessed at https://yzy-bupt.github.io/SVBench.
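To make the benchmark's structure concrete, here is a minimal sketch of how timestamped QA chains with temporal linkages between successive segments might be represented. All names (`QAPair`, `QAChain`, `playable_context`) and the toy dialogue are illustrative assumptions, not SVBench's actual data schema.

```python
from dataclasses import dataclass, field

@dataclass
class QAPair:
    question: str
    answer: str

@dataclass
class QAChain:
    # Segment boundaries (seconds) within the streaming video.
    start: float
    end: float
    # A multi-turn dialogue grounded in this segment.
    dialogue: list[QAPair] = field(default_factory=list)
    # Indices of earlier chains this chain depends on: the
    # temporal linkage between successive QA chains.
    links: list[int] = field(default_factory=list)

def playable_context(chains: list[QAChain], i: int) -> list[QAChain]:
    """Earlier linked chains a model may consult when answering
    chain i in a streaming evaluation (no access to future segments)."""
    return [chains[j] for j in chains[i].links if j < i]

# Toy example: the second segment's question refers back to the first.
chains = [
    QAChain(0.0, 10.0, [QAPair("What enters the frame?", "A red car.")]),
    QAChain(10.0, 20.0,
            [QAPair("Where does the car from before go?", "It turns left.")],
            links=[0]),
]
ctx = playable_context(chains, 1)
```

This layout captures the two properties the abstract emphasizes: multi-turn dialogue within each segment, and explicit links that force reasoning across the stream rather than over a single isolated clip.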