🤖 AI Summary
To address GPU memory explosion and high inference latency caused by long video streams in online video understanding with multimodal large language models (MLLMs), this paper proposes MSSAVT (Maximum Similarity to Spatially Adjacent Video Tokens), a visual token pruning framework tailored for real-time scenarios. The key contributions are: (1) a redundancy metric that integrates spatial proximity and token similarity; (2) a mask-based pruning strategy that decouples pruning decisions from redundancy modeling, eliminating their cyclic dependency; and (3) joint spatiotemporal redundancy elimination coupled with lightweight online inference optimization. Evaluated across multiple benchmarks, MSSAVT achieves up to a 4% accuracy gain, incurs pruning overhead of less than 1 ms, and significantly reduces GPU memory consumption and end-to-end latency. These improvements enable practical deployment in latency-critical applications such as AI-powered smart glasses and intelligent surveillance systems.
📝 Abstract
Online video understanding is essential for applications like public surveillance and AI glasses. However, applying Multimodal Large Language Models (MLLMs) to this domain is challenging due to the large number of video frames, which leads to high GPU memory usage and computational latency. To address these challenges, we propose token pruning as a means to reduce context length while retaining critical information. Specifically, we introduce a novel redundancy metric, Maximum Similarity to Spatially Adjacent Video Tokens (MSSAVT), which accounts for both token similarity and spatial position. To mitigate the bidirectional dependency between pruning and redundancy, we further design a masked pruning strategy that ensures only mutually non-adjacent tokens are pruned. We also integrate an existing temporal redundancy-based pruning method to eliminate temporal redundancy in the video modality. Experimental results on multiple online and offline video understanding benchmarks demonstrate that our method significantly improves accuracy (by up to 4%) while incurring negligible pruning latency (less than 1 ms). Our full implementation will be made publicly available.
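To make the two core ideas concrete, the following is a minimal sketch of what an MSSAVT-style scoring and masked-pruning step could look like. It is a hypothetical reconstruction from the abstract, not the authors' released code: the 4-neighbour adjacency, cosine similarity, pruning ratio, and the greedy order are all assumptions.

```python
import numpy as np

def mssavt_prune(tokens, grid_h, grid_w, ratio=0.5):
    """Hypothetical sketch of MSSAVT-style pruning for one frame.

    tokens: (grid_h * grid_w, D) array of visual token features.
    Each token is scored by its maximum cosine similarity to its
    spatially adjacent (4-neighbour) tokens; the most redundant
    tokens are then pruned greedily, under the masked constraint
    that no two pruned tokens are spatially adjacent.
    Returns the indices of the kept tokens.
    """
    n, _ = tokens.shape
    assert n == grid_h * grid_w
    # Normalise rows so dot products are cosine similarities.
    x = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)

    def neighbours(i):
        r, c = divmod(i, grid_w)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid_h and 0 <= cc < grid_w:
                yield rr * grid_w + cc

    # Redundancy score: max similarity to any spatial neighbour.
    score = np.array([max(x[i] @ x[j] for j in neighbours(i))
                      for i in range(n)])

    budget = int(n * ratio)           # assumed pruning budget
    pruned = np.zeros(n, dtype=bool)
    for i in np.argsort(-score):      # most redundant first
        if pruned.sum() >= budget:
            break
        # Masked pruning: never prune next to an already-pruned token.
        if not any(pruned[j] for j in neighbours(i)):
            pruned[i] = True
    return np.flatnonzero(~pruned)
```

The mutual non-adjacency constraint means every pruned token always retains at least one surviving neighbour, so highly similar local content is never removed wholesale; because scoring is a single pass over a small spatial neighbourhood, the per-frame overhead stays well within the sub-millisecond budget the paper reports.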