LiViBench: An Omnimodal Benchmark for Interactive Livestream Video Understanding

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing video understanding benchmarks, which predominantly focus on non-interactive videos and fail to evaluate multimodal interactive livestream content that incorporates audio, speech, and real-time danmaku (scrolling comments). To bridge this gap, the authors propose LiViBench, the first comprehensive omnimodal benchmark for interactive livestream videos, encompassing 24 diverse tasks. LiViBench is built with a semi-automatic annotation pipeline featuring multi-agent collaborative labeling and a seed-question-driven approach to ensure high-quality data. Building on this benchmark, the authors develop LiVi-LLM-7B, a large language model fine-tuned via a two-stage instruction-tuning strategy and equipped with a novel Video-to-Comment Retrieval (VCR) module that improves its use of real-time comments. The model outperforms open-source counterparts of up to 72B parameters on LiViBench, narrows the performance gap with leading closed-source models, and achieves significant gains on general-purpose benchmarks such as VideoMME and LongVideoBench.

📝 Abstract
The development of multimodal large language models (MLLMs) has advanced general video understanding. However, existing video evaluation benchmarks primarily focus on non-interactive videos, such as movies and recordings. To fill this gap, this paper proposes the first omnimodal benchmark for interactive livestream videos, LiViBench. It features a diverse set of 24 tasks, highlighting perceptual, reasoning, and livestream-specific challenges. To efficiently construct the dataset, we design a standardized semi-automatic annotation workflow that incorporates human-in-the-loop review at multiple stages. The workflow leverages multiple MLLMs to form a multi-agent system for comprehensive video description and uses a seed-question-driven method to construct high-quality annotations. All interactive videos in the benchmark include audio, speech, and real-time comment modalities. To enhance models' understanding of interactive videos, we design tailored two-stage instruction-tuning and propose a Video-to-Comment Retrieval (VCR) module to improve the model's ability to utilize real-time comments. Based on these advancements, we develop LiVi-LLM-7B, an MLLM with enhanced knowledge of interactive livestreams. Experiments show that our model outperforms larger open-source models with up to 72B parameters, narrows the gap with leading proprietary models on LiViBench, and achieves enhanced performance on general video benchmarks, including VideoMME, LongVideoBench, MLVU, and VideoEval-Pro.
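The abstract describes a Video-to-Comment Retrieval (VCR) module for surfacing the real-time comments most relevant to the video content. The paper's actual architecture is not detailed here; the following is a minimal sketch of one plausible formulation, retrieval by cosine similarity between a video-frame embedding and comment embeddings, where all function and variable names are hypothetical:

```python
import numpy as np

def retrieve_comments(frame_emb, comment_embs, comment_texts, top_k=3):
    """Rank real-time comments by cosine similarity to a video-frame
    embedding and return the top_k most relevant comment texts.

    Hypothetical sketch only -- not the paper's actual VCR module.
    frame_emb:    (d,) embedding of the current video segment
    comment_embs: (n, d) embeddings of candidate comments
    """
    # Normalize so that dot products equal cosine similarities.
    frame = frame_emb / np.linalg.norm(frame_emb)
    comments = comment_embs / np.linalg.norm(comment_embs, axis=1, keepdims=True)
    scores = comments @ frame
    # Indices of the top_k highest-scoring comments, best first.
    order = np.argsort(-scores)[:top_k]
    return [comment_texts[i] for i in order]
```

Retrieved comments could then be prepended to the model's context alongside the video tokens, which is one common way such a module feeds an MLLM.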
Problem

Research questions and friction points this paper is trying to address.

interactive livestream video
video understanding benchmark
multimodal large language models
real-time comments
omnimodal evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

omnimodal benchmark
interactive livestream video
multi-agent MLLM annotation
Video-to-Comment Retrieval (VCR)
two-stage instruction-tuning
Xiaodong Wang
Peking University
generative models, computer vision
Langling Huang
School of Electronic and Computer Engineering, Peking University
Zhirong Wu
School of Electronic and Computer Engineering, Peking University
Xu Zhao
Douyin Group
Teng Xu
Graduate Student, ShanghaiTech University
Computer Vision, Computer Graphics
Xuhong Xia
Douyin Group
Peixi Peng
School of Electronic and Computer Engineering, Peking University