🤖 AI Summary
Existing streaming video understanding methods struggle to simultaneously achieve high comprehension performance, real-time responsiveness, and low GPU memory consumption. This work proposes a training-free, efficient architecture that, for the first time, models the key-value (KV) cache as a multi-granularity hierarchical memory mechanism. During inference, the approach compresses and reuses compact cached representations of the video, enabling highly efficient use of streaming information. Notably, it supports real-time user interaction without additional computation, substantially reducing both latency and memory footprint. Experimental results demonstrate that, compared to the prior state of the art, the proposed approach reduces first-token latency by 10×, achieves up to an 11.4% absolute accuracy gain while using 68% fewer video tokens, and matches or exceeds existing methods across all benchmarks.
📝 Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated significant improvements in offline video understanding. However, extending these capabilities to streaming video inputs remains challenging, as existing models struggle to simultaneously maintain stable understanding performance, real-time responses, and low GPU memory overhead. To address this challenge, we propose HERMES, a novel training-free architecture for real-time and accurate understanding of video streams. Based on a mechanistic attention investigation, we conceptualize the KV cache as a hierarchical memory framework that encapsulates video information across multiple granularities. During inference, HERMES reuses a compact KV cache, enabling efficient streaming understanding under resource constraints. Notably, HERMES requires no auxiliary computation upon the arrival of user queries, thereby guaranteeing real-time responses for continuous video stream interactions and achieving 10$\times$ faster time-to-first-token (TTFT) than the prior SOTA. Even when reducing video tokens by up to 68% compared with uniform sampling, HERMES achieves superior or comparable accuracy across all benchmarks, with gains of up to 11.4% on streaming datasets.
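To make the "KV cache as multi-granularity hierarchical memory" idea concrete, the following is a minimal illustrative sketch, not HERMES's actual design: it keeps token vectors for the most recent frames at full granularity while average-pooling older frames into a coarser long-term store, so the cache the model attends over grows far more slowly than the raw stream. All class and parameter names (`HierarchicalKVCache`, `recent_capacity`, `pool_factor`) are hypothetical.

```python
from collections import deque

class HierarchicalKVCache:
    """Toy two-level memory for streaming video tokens (illustrative only).

    Recent frames keep every token; frames evicted from the recent window
    are average-pooled by `pool_factor` into a compact long-term store.
    """

    def __init__(self, recent_capacity=4, pool_factor=4):
        self.recent = deque()      # (frame_id, tokens) at full granularity
        self.long_term = []        # pooled token vectors for older frames
        self.recent_capacity = recent_capacity
        self.pool_factor = pool_factor

    def _pool(self, tokens):
        """Average-pool token vectors in groups of `pool_factor`."""
        k = self.pool_factor
        pooled = []
        for i in range(0, len(tokens), k):
            group = tokens[i:i + k]
            dim = len(group[0])
            pooled.append([sum(v[d] for v in group) / len(group)
                           for d in range(dim)])
        return pooled

    def add_frame(self, frame_id, tokens):
        """Ingest one frame; demote the oldest frame once the window is full."""
        self.recent.append((frame_id, tokens))
        if len(self.recent) > self.recent_capacity:
            _, old_tokens = self.recent.popleft()
            self.long_term.extend(self._pool(old_tokens))

    def context(self):
        """Tokens the model would attend over: coarse past + detailed present."""
        recent_tokens = [t for _, toks in self.recent for t in toks]
        return self.long_term + recent_tokens
```

For example, after streaming 8 frames of 16 tokens each with a window of 4 and a pooling factor of 4, the cache holds 64 full-resolution recent tokens plus 16 pooled long-term tokens (80 total) instead of all 128, and a query can be answered directly against `context()` with no extra recomputation.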