Vista: Scene-Aware Optimization for Streaming Video Question Answering under Post-Hoc Queries

📅 2026-02-09
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work proposes a scene-aware streaming video question answering framework to address context loss and memory overflow caused by the continuous arrival of video frames and arbitrary-time queries. By integrating dynamic scene segmentation, GPU/CPU-coordinated compressed storage, and a query-driven selective recall mechanism, the approach enables efficient, low-latency comprehension of long videos. Key innovations include dynamic video segmentation based on scene clustering, heterogeneous memory management, index-based retrieval, and a model-agnostic architecture. Evaluated on StreamingBench, the method achieves state-of-the-art performance, significantly enhancing system scalability and inference completeness.

📝 Abstract
Streaming video question answering (Streaming Video QA) poses distinct challenges for multimodal large language models (MLLMs), as video frames arrive sequentially and user queries can be issued at arbitrary time points. Existing solutions relying on fixed-size memory or naive compression often suffer from context loss or memory overflow, limiting their effectiveness in long-form, real-time scenarios. We present Vista, a novel framework for scene-aware streaming video QA that enables efficient and scalable reasoning over continuous video streams. The innovation of Vista can be summarized in three aspects: (1) scene-aware segmentation, where Vista dynamically clusters incoming frames into temporally and visually coherent scene units; (2) scene-aware compression, where each scene is compressed into a compact token representation and stored in GPU memory for efficient index-based retrieval, while full-resolution frames are offloaded to CPU memory; and (3) scene-aware recall, where relevant scenes are selectively recalled and reintegrated into the model input upon receiving a query, enabling both efficiency and completeness. Vista is model-agnostic and integrates seamlessly with a variety of vision-language backbones, enabling long-context reasoning without compromising latency or memory efficiency. Extensive experiments on StreamingBench demonstrate that Vista achieves state-of-the-art performance, establishing a strong baseline for real-world streaming video understanding.
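The three scene-aware stages described in the abstract can be sketched as a minimal streaming pipeline. This is an illustrative assumption of how the stages fit together, not the paper's implementation: the cosine-similarity scene boundary, mean-pool compression, and top-k token retrieval are all placeholder choices, and `VistaSketch` is a hypothetical name.

```python
import math

class VistaSketch:
    """Hedged sketch of the three scene-aware stages: segmentation,
    compression with frame offload, and query-driven recall."""

    def __init__(self, sim_threshold=0.85, top_k=2):
        self.sim_threshold = sim_threshold  # scene-boundary cutoff (assumed)
        self.top_k = top_k                  # scenes recalled per query (assumed)
        self.current_scene = []   # frame features of the open scene
        self.scene_tokens = []    # compact per-scene tokens (kept "on GPU")
        self.frame_store = {}     # full frames per scene id (offloaded "to CPU")

    @staticmethod
    def _cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-8)

    @staticmethod
    def _mean(frames):
        n = len(frames)
        return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

    def ingest(self, frame_feat):
        # Stage 1: segmentation -- start a new scene when the incoming frame
        # diverges from the running representation of the open scene.
        if self.current_scene and \
           self._cos(frame_feat, self._mean(self.current_scene)) < self.sim_threshold:
            self._close_scene()
        self.current_scene.append(frame_feat)

    def _close_scene(self):
        # Stage 2: compression -- mean-pool the scene into one compact token
        # and offload the full-resolution frames to the CPU-side store.
        sid = len(self.scene_tokens)
        self.scene_tokens.append(self._mean(self.current_scene))
        self.frame_store[sid] = self.current_scene
        self.current_scene = []

    def recall(self, query_feat):
        # Stage 3: recall -- rank compressed scene tokens against the query
        # and bring back only the top-k scenes' full frames.
        if self.current_scene:
            self._close_scene()
        ranked = sorted(range(len(self.scene_tokens)),
                        key=lambda s: self._cos(query_feat, self.scene_tokens[s]),
                        reverse=True)
        return [self.frame_store[s] for s in ranked[:self.top_k]]
```

In a real system the frame features would come from a vision encoder and the recalled frames would be re-tokenized into the MLLM's input; the sketch only shows the memory-management control flow.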
Problem

Research questions and friction points this paper is trying to address.

Streaming Video QA
Multimodal Large Language Models
Long-form Video Understanding
Real-time Video Processing
Memory Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scene-Aware Segmentation
Scene-Aware Compression
Scene-Aware Recall
Streaming Video QA
Multimodal Large Language Models
Haocheng Lu
Huazhong University of Science and Technology
Nan Zhang
Ping An Technology (Shenzhen) Co., Ltd.
Wei Tao
Huazhong University of Science and Technology
Quantization · LLM · Time-Series
Xiaoyang Qu
Ping An Technology (Shenzhen) Co., Ltd.
Guokuan Li
Huazhong University of Science and Technology
Jiguang Wan
Huazhong University of Science and Technology
Jianzong Wang
Postdoctoral Researcher, Department of Electrical and Computer Engineering, University of Florida
Big Data · Storage System · Cloud Computing