StreamingVLM: Real-Time Understanding for Infinite Video Streams

📅 2025-10-10
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
To address the high latency, unbounded memory growth, and poor temporal coherence of sliding-window processing when vision-language models (VLMs) handle infinite-length video streams, this paper proposes the first streaming video understanding framework that aligns training with inference. Its core innovation is a unified attention-cache reuse mechanism: supervised fine-tuning on short, overlapping video segments teaches the model to reuse attention-sink states, a short window of recent vision tokens, and a long window of recent text tokens, yielding a compact, incrementally updatable KV cache. The method also enhances general video question answering without any VQA-specific fine-tuning. On Inf-Streams-Eval (videos averaging over two hours), it achieves a 66.18% win rate against GPT-4o mini; sustains real-time inference at up to 8 FPS on a single NVIDIA H100 GPU; and improves scores by +4.30 on LongVideoBench and +5.96 on OVOBench Realtime.
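A minimal sketch of how such a compact KV cache could be maintained, assuming a FIFO eviction policy per modality; the names (CachedToken, StreamingKVCache) and window sizes are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque
from dataclasses import dataclass
from typing import Any

@dataclass
class CachedToken:
    kind: str   # "sink", "vision", or "text"
    key: Any    # placeholder for a per-token key tensor
    value: Any  # placeholder for a per-token value tensor

class StreamingKVCache:
    def __init__(self, num_sinks=4, vision_window=512, text_window=4096):
        self.num_sinks = num_sinks
        self.sinks = []                            # attention-sink states, never evicted
        self.vision = deque(maxlen=vision_window)  # short window of recent vision tokens
        self.text = deque(maxlen=text_window)      # long window of recent text tokens

    def append(self, token: CachedToken) -> None:
        # Pin the first few tokens of the stream as attention sinks; after
        # that, each modality window evicts its oldest entries automatically.
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(token)
        elif token.kind == "vision":
            self.vision.append(token)
        else:
            self.text.append(token)

    def context(self) -> list:
        # Attention context for the next decode step:
        # sinks + recent vision window + recent text window.
        return self.sinks + list(self.vision) + list(self.text)
```

The asymmetric windows reflect the design described above: vision tokens are evicted quickly to bound memory, while text tokens persist longer to preserve coherence of the generated commentary.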

📝 Abstract
Vision-language models (VLMs) could power real-time assistants and autonomous agents, but they face a critical challenge: understanding near-infinite video streams without escalating latency and memory usage. Processing entire videos with full attention leads to quadratic computational costs and poor performance on long videos. Meanwhile, simple sliding window methods are also flawed, as they either break coherence or suffer from high latency due to redundant recomputation. In this paper, we introduce StreamingVLM, a model designed for real-time, stable understanding of infinite visual input. Our approach is a unified framework that aligns training with streaming inference. During inference, we maintain a compact KV cache by reusing states of attention sinks, a short window of recent vision tokens, and a long window of recent text tokens. This streaming ability is instilled via a simple supervised fine-tuning (SFT) strategy that applies full attention on short, overlapped video chunks, which effectively mimics the inference-time attention pattern without training on prohibitively long contexts. For evaluation, we build Inf-Streams-Eval, a new benchmark with videos averaging over two hours that requires dense, per-second alignment between frames and text. On Inf-Streams-Eval, StreamingVLM achieves a 66.18% win rate against GPT-4o mini and maintains stable, real-time performance at up to 8 FPS on a single NVIDIA H100. Notably, our SFT strategy also enhances general VQA abilities without any VQA-specific fine-tuning, improving performance on LongVideoBench by +4.30 and OVOBench Realtime by +5.96. Code is available at https://github.com/mit-han-lab/streaming-vlm.
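The SFT strategy in the abstract trains with full attention on short, overlapped chunks rather than on full-length streams. A minimal sketch of the chunking idea, with illustrative chunk and overlap sizes (not the paper's settings):

```python
def overlapped_chunks(num_frames, chunk_len=256, overlap=64):
    """Yield (start, end) frame-index pairs covering the stream with a fixed overlap."""
    stride = chunk_len - overlap
    start = 0
    while start < num_frames:
        end = min(start + chunk_len, num_frames)
        yield start, end
        if end == num_frames:
            break
        start += stride

# Example: a 600-frame clip yields chunks (0, 256), (192, 448), (384, 600),
# so each training chunk shares its first `overlap` frames with the previous one.
print(list(overlapped_chunks(600)))
```

Because each chunk begins with frames the model already saw at the end of the previous chunk, full attention within a chunk approximates the inference-time pattern of attending to sinks plus a recent window, without ever training on a prohibitively long context.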
Problem

Research questions and friction points this paper is trying to address.

Enabling real-time understanding of infinite video streams without escalating latency or memory
Reducing the quadratic computational cost of full attention over long videos
Maintaining temporal coherence without the latency of redundant sliding-window recomputation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reuses attention-sink states and recent vision/text tokens to keep a compact KV cache
Applies supervised fine-tuning with full attention on short, overlapped video chunks
Aligns training with streaming inference in a unified framework (see the sketch below)
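A minimal, self-contained sketch of what the aligned per-frame streaming loop could look like at inference time, assuming a bounded cache with the append/context interface sketched earlier; encode_frame and decode_step are hypothetical stand-ins for the model's vision encoder and one-token language decode:

```python
def streaming_loop(frame_source, cache, encode_frame, decode_step, tokens_per_frame=2):
    """Generate commentary tokens over an unbounded frame stream with a bounded cache."""
    for frame in frame_source:              # potentially infinite iterator of frames
        for kv in encode_frame(frame):      # hypothetical: frame -> vision-token KV states
            cache.append(kv)                # old vision tokens are evicted when full
        for _ in range(tokens_per_frame):   # emit a few text tokens per frame
            kv, token = decode_step(cache.context())  # hypothetical one-token decode
            cache.append(kv)                # text tokens live in the longer window
            yield token
```

Because the cache size is constant, each step costs the same regardless of how long the stream has been running, which is what makes stable throughput on a single GPU plausible.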