🤖 AI Summary
Existing streaming video understanding methods often suffer from irreversible detail loss and fragmented context due to limited adaptability. To address this, this work proposes a frequency-space hybrid memory architecture inspired by the brain's logarithmic perception and memory consolidation mechanisms. The approach introduces, for the first time, a training-free dual-channel memory system: Multi-scale Frequency Memory (MFM) preserves short-term details, while Space Thumbnail Memory (STM) maintains long-term coherence. These components are further enhanced by adaptive compression, frequency-domain projection, and residual reconstruction strategies. Evaluated on StreamingBench, OV-Bench, and OVO-Bench, the method achieves performance gains of 5.20%, 4.52%, and 2.34%, respectively, surpassing several fully fine-tuned models.
📝 Abstract
Transitioning Multimodal Large Language Models (MLLMs) from offline to online streaming video understanding is essential for continuous perception. However, existing methods lack flexible adaptability, leading to irreversible detail loss and context fragmentation. To resolve this, we propose FreshMem, a Frequency-Space Hybrid Memory network inspired by the brain's logarithmic perception and memory consolidation. FreshMem reconciles short-term fidelity with long-term coherence through two synergistic modules: Multi-scale Frequency Memory (MFM), which projects overflowing frames into representative frequency coefficients, complemented by residual details to reconstruct a global historical "gist"; and Space Thumbnail Memory (STM), which discretizes the continuous stream into episodic clusters and applies an adaptive compression strategy to distill them into high-density space thumbnails. Extensive experiments show that FreshMem significantly boosts the Qwen2-VL baseline, yielding gains of 5.20%, 4.52%, and 2.34% on StreamingBench, OV-Bench, and OVO-Bench, respectively. As a training-free solution, FreshMem outperforms several fully fine-tuned methods, offering a highly efficient paradigm for long-horizon streaming video understanding.
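The core idea behind MFM, projecting a growing frame history onto a few frequency coefficients (the "gist") plus a residual that preserves fine detail, can be illustrated with a minimal temporal-DCT sketch. This is an assumption-laden toy analogue, not the paper's actual implementation: the function name `compress_history`, the use of a DCT along the time axis, and the choice of `k` low-frequency coefficients are all hypothetical illustrations of the general principle.

```python
import numpy as np
from scipy.fft import dct, idct


def compress_history(frames: np.ndarray, k: int):
    """Toy analogue of frequency-domain memory compression.

    Projects a (T, D) stack of frame features onto its first k
    temporal DCT coefficients (a low-frequency "gist") and keeps
    the reconstruction residual as the complementary fine detail.
    Illustrative only -- not FreshMem's actual MFM module.
    """
    coeffs = dct(frames, axis=0, norm="ortho")   # temporal frequency spectrum
    kept = np.zeros_like(coeffs)
    kept[:k] = coeffs[:k]                        # retain low frequencies only
    gist_recon = idct(kept, axis=0, norm="ortho")
    residual = frames - gist_recon               # detail the gist cannot capture
    return coeffs[:k], residual


# Example: 64 frames of 16-dim features compressed to 8 coefficients.
T, D, k = 64, 16, 8
frames = np.random.randn(T, D)
gist, residual = compress_history(frames, k)

# Gist + residual reconstruct the original exactly (orthonormal DCT).
padded = np.vstack([gist, np.zeros((T - k, D))])
recon = idct(padded, axis=0, norm="ortho") + residual
assert np.allclose(recon, frames)
```

Because the orthonormal DCT is invertible, the gist-plus-residual decomposition is lossless here; a practical memory would instead store the residual at reduced precision or resolution, trading detail for capacity.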