RIVER: A Real-Time Interaction Benchmark for Video LLMs

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of current video large language models, which are primarily designed for offline processing and lack real-time interactivity, rendering them ill-suited for dynamic, continuous video understanding tasks. To bridge this gap, the authors introduce RIVER Bench, a novel benchmark that establishes the first evaluation framework tailored for real-time video interaction. The framework simulates realistic interactive scenarios through three core capabilities: retrospective memory, real-time perception, and proactive prediction. Built upon meticulously annotated, multi-source heterogeneous video data and a task-driven interaction protocol, RIVER Bench integrates memory mechanisms with forward-looking reasoning modules, thereby defining a new paradigm for online video understanding. Experimental results demonstrate that existing offline models perform significantly worse on real-time tasks, whereas the proposed approach effectively enhances interactive performance in long-term memory retention and future prediction.

📝 Abstract
The rapid advancement of multimodal large language models has demonstrated impressive capabilities, yet nearly all operate in an offline paradigm, hindering real-time interactivity. Addressing this gap, we introduce the Real-tIme Video intERaction Bench (RIVER Bench), designed for evaluating online video comprehension. RIVER Bench introduces a novel framework comprising Retrospective Memory, Live-Perception, and Proactive Anticipation tasks, closely mimicking interactive dialogues rather than responding to entire videos at once. We conducted detailed annotations using videos from diverse sources and of varying lengths, and precisely defined the real-time interactive format. Evaluations across various model categories reveal that while offline models perform well in single question-answering tasks, they struggle with real-time processing. Addressing the limitations of existing models in online video interaction, especially their deficiencies in long-term memory and future perception, we propose a general improvement method that enables models to interact with users more flexibly in real time. We believe this work will significantly advance the development of real-time interactive video understanding models and inspire future research in this emerging field. Datasets and code are publicly available at https://github.com/OpenGVLab/RIVER.
Problem

Research questions and friction points this paper is trying to address.

real-time interaction
video understanding
multimodal LLMs
online comprehension
interactive dialogue
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time Interaction
Video LLMs
Retrospective Memory
Live Perception
Proactive Anticipation
👥 Authors

Yansong Shi (School of Information Science and Technology, University of Science and Technology of China; Shanghai Artificial Intelligence Laboratory)
Qingsong Zhao (Tongji University): Machine Learning, Computer Vision
Tianxiang Jiang (School of Information Science and Technology, University of Science and Technology of China; Shanghai Artificial Intelligence Laboratory)
Xiangyu Zeng (Nanjing University; Shanghai AI Laboratory): Computer Vision, MLLM
Yi Wang (Shanghai AI Laboratory): Computer Vision, Pattern Recognition
Limin Wang (Nanjing University): Computer Vision, Action Recognition, Video Understanding