🤖 AI Summary
This work addresses the tension between preserving long-term temporal context and meeting real-time inference constraints in existing video large language models under streaming scenarios. The authors propose WAT (Watching Before Thinking), a two-stage framework that first constructs a hierarchical memory—comprising a short-term buffer and a fixed-capacity long-term memory—during a query-independent "Watch" phase, then performs cross-temporal reasoning in a query-triggered "Think" phase by retrieving relevant historical information from long-term memory conditioned on the current query. Key innovations include the Watch-then-Think mechanism, a redundancy-aware eviction policy for long-term memory, and a context-aware historical frame retrieval method. The study also introduces WAT-85K, a dataset tailored for streaming video understanding. Evaluated on StreamingBench and OVO-Bench, the model achieves 77.7% and 55.2% accuracy, respectively, significantly outperforming existing open-source online video LLMs while meeting real-time processing requirements.
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong capabilities in image understanding, motivating recent efforts to extend them to video reasoning. However, existing Video LLMs struggle in online streaming scenarios, where long temporal context must be preserved under strict memory constraints. We propose WAT (Watching Before Thinking), a two-stage framework for online video reasoning. WAT separates processing into a query-independent watching stage and a query-triggered thinking stage. The watching stage builds a hierarchical memory system with a Short-Term Memory (STM) that buffers recent frames and a fixed-capacity Long-Term Memory (LTM) that maintains a diverse summary of historical content using a redundancy-aware eviction policy. In the thinking stage, a context-aware retrieval mechanism combines the query with the current STM context to retrieve relevant historical frames from the LTM for cross-temporal reasoning. To support training for online video tasks, we introduce WAT-85K, a dataset containing streaming-style annotations emphasizing real-time perception, backward tracing, and forecasting. Experiments show that WAT achieves state-of-the-art performance on online video benchmarks, including 77.7% accuracy on StreamingBench and 55.2% on OVO-Bench, outperforming existing open-source online Video LLMs while operating at real-time frame rates.
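The hierarchical memory described above can be sketched in a few dozen lines. The sketch below is illustrative only and is not the paper's implementation: the class name `HierarchicalMemory`, the use of plain feature vectors for frames, the nearest-neighbour definition of redundancy, and the averaging of the query with the STM mean are all assumptions chosen to make the mechanism concrete.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class HierarchicalMemory:
    """Sketch of an STM buffer plus a fixed-capacity LTM with
    redundancy-aware eviction and context-aware retrieval."""

    def __init__(self, stm_size=4, ltm_size=8):
        self.stm_size, self.ltm_size = stm_size, ltm_size
        self.stm = []  # recent frame features (FIFO buffer)
        self.ltm = []  # diverse summary of historical frames

    def add_frame(self, feat):
        """Watching stage: buffer the frame; overflow moves to LTM."""
        self.stm.append(feat)
        if len(self.stm) > self.stm_size:
            self._to_ltm(self.stm.pop(0))

    def _to_ltm(self, feat):
        self.ltm.append(feat)
        if len(self.ltm) > self.ltm_size:
            # Redundancy-aware eviction (assumed form): drop the frame
            # most similar to its nearest neighbour, keeping LTM diverse.
            def redundancy(i):
                return max(cosine(self.ltm[i], self.ltm[j])
                           for j in range(len(self.ltm)) if j != i)
            self.ltm.pop(max(range(len(self.ltm)), key=redundancy))

    def retrieve(self, query_feat, k=2):
        """Thinking stage: score LTM frames against the query combined
        with the current STM context (here, their element-wise mean)."""
        if self.stm:
            ctx = [sum(c) / len(self.stm) for c in zip(*self.stm)]
        else:
            ctx = query_feat
        combined = [(q + c) / 2 for q, c in zip(query_feat, ctx)]
        return sorted(self.ltm,
                      key=lambda f: cosine(f, combined),
                      reverse=True)[:k]
```

In this reading, the watching stage runs per frame and is query-independent, while `retrieve` runs only when a query arrives, which is what keeps per-frame cost constant regardless of stream length.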