StreamingAssistant: Efficient Visual Token Pruning for Accelerating Online Video Understanding

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address GPU memory growth and high inference latency when multimodal large language models (MLLMs) process long video streams for online video understanding, this paper proposes StreamingAssistant, a visual token pruning framework tailored for real-time scenarios. Its key contributions are: (1) MSSAVT, a redundancy metric integrating spatial proximity and token similarity; (2) a mask-based pruning strategy that decouples pruning decisions from redundancy modeling, eliminating their cyclic dependency; and (3) joint spatiotemporal redundancy elimination coupled with lightweight online inference optimization. Evaluated across multiple benchmarks, the method achieves up to a 4% accuracy gain, incurs pruning overhead of less than 1 ms, and significantly reduces GPU memory consumption and end-to-end latency. These improvements enable practical deployment in latency-critical applications such as AI-powered smart glasses and intelligent surveillance systems.


📝 Abstract
Online video understanding is essential for applications like public surveillance and AI glasses. However, applying Multimodal Large Language Models (MLLMs) to this domain is challenging due to the large number of video frames, which results in high GPU memory usage and computational latency. To address these challenges, we propose token pruning as a means to reduce context length while retaining critical information. Specifically, we introduce a novel redundancy metric, Maximum Similarity to Spatially Adjacent Video Tokens (MSSAVT), which accounts for both token similarity and spatial position. To mitigate the bidirectional dependency between pruning and redundancy, we further design a masked pruning strategy that ensures only mutually non-adjacent tokens are pruned. We also integrate an existing temporal redundancy-based pruning method to eliminate temporal redundancy in the video modality. Experimental results on multiple online and offline video understanding benchmarks demonstrate that our method significantly improves accuracy (by up to 4%) while incurring negligible pruning latency (less than 1 ms). Our full implementation will be made publicly available.
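The abstract describes MSSAVT as each token's maximum similarity to its spatially adjacent tokens on the frame's patch grid. A minimal sketch of such a score is below; the 4-neighbourhood, cosine similarity, and single-frame scope are assumptions, since the page does not give the paper's exact formulation.

```python
import numpy as np

def mssavt_scores(tokens, grid_h, grid_w):
    """Sketch of an MSSAVT-style redundancy score: for each visual token,
    the maximum cosine similarity to its spatially adjacent (4-neighbour)
    tokens on the frame's patch grid. A high score means the token is
    largely duplicated by a neighbour and is a pruning candidate."""
    # tokens: (grid_h * grid_w, d) array of token embeddings for one frame
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    scores = np.zeros(grid_h * grid_w)
    for r in range(grid_h):
        for c in range(grid_w):
            i = r * grid_w + c
            best = -1.0
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < grid_h and 0 <= nc < grid_w:
                    j = nr * grid_w + nc
                    best = max(best, float(normed[i] @ normed[j]))
            scores[i] = best
    return scores
```

Because only a fixed, small neighbourhood is compared per token, the score is linear in the number of tokens, consistent with the paper's sub-millisecond pruning overhead claim.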
Problem

Research questions and friction points this paper is trying to address.

Reduces GPU memory usage in video understanding
Decreases computational latency for online video analysis
Prunes redundant tokens while preserving critical information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token pruning reduces context length for efficiency
MSSAVT metric assesses redundancy using similarity and position
Masked pruning strategy eliminates bidirectional dependency issues
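The masked pruning idea (prune only mutually non-adjacent tokens, so redundancy estimates are never invalidated by the pruning they drive) can be sketched as a greedy pass over redundancy scores. The greedy order, prune ratio, and 4-neighbourhood below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def masked_prune(scores, grid_h, grid_w, prune_ratio=0.3):
    """Sketch of a masked pruning pass: greedily mark the most redundant
    tokens for removal, but never two spatially adjacent ones. Each pruned
    token therefore keeps all the neighbours against which its redundancy
    was measured, breaking the pruning/redundancy cyclic dependency."""
    n = grid_h * grid_w
    budget = int(n * prune_ratio)
    keep = np.ones(n, dtype=bool)
    pruned = set()

    def neighbours(i):
        r, c = divmod(int(i), grid_w)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid_h and 0 <= nc < grid_w:
                yield nr * grid_w + nc

    for i in np.argsort(-scores):  # most redundant first
        if len(pruned) >= budget:
            break
        if any(j in pruned for j in neighbours(i)):
            continue  # skip: would sit next to an already-pruned token
        pruned.add(int(i))
        keep[i] = False
    return keep
```

The returned boolean mask selects the surviving tokens; by construction the pruned set is an independent set on the patch grid.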
Authors

Xinqi Jin (Tsinghua University)
Hanxun Yu (Zhejiang University)
Bohan Yu (Ant Group)
Kebin Liu (Tsinghua University)
Jian Liu (Ant Group)
Keda Tao (Westlake University)
Yixuan Pei (Ant Group)
Huan Wang (Westlake University)
Fan Dang (Beijing Jiaotong University)
Jiangchuan Liu (Simon Fraser University)
Weiqiang Wang (Ant Group)