FreshMem: Brain-Inspired Frequency-Space Hybrid Memory for Streaming Video Understanding

📅 2026-02-02
🤖 AI Summary
Existing streaming video understanding methods often suffer from irreversible detail loss and fragmented contextual representation due to limited adaptability. To address this, this work proposes a frequency-space hybrid memory architecture inspired by the brain's logarithmic perception and memory consolidation mechanisms. The approach introduces, for the first time, a training-free dual-channel memory system: Multi-scale Frequency Memory (MFM) preserves short-term details, while Space Thumbnail Memory (STM) maintains long-term coherence. These components are further enhanced by adaptive compression, frequency-domain projection, and residual reconstruction strategies. Evaluated on StreamingBench, OV-Bench, and OVO-Bench, the method achieves performance gains of 5.20%, 4.52%, and 2.34%, respectively, surpassing several fully fine-tuned models.

📝 Abstract
Transitioning Multimodal Large Language Models (MLLMs) from offline to online streaming video understanding is essential for continuous perception. However, existing methods lack flexible adaptivity, leading to irreversible detail loss and context fragmentation. To resolve this, we propose FreshMem, a Frequency-Space Hybrid Memory network inspired by the brain's logarithmic perception and memory consolidation. FreshMem reconciles short-term fidelity with long-term coherence through two synergistic modules: Multi-scale Frequency Memory (MFM), which projects overflowing frames into representative frequency coefficients, complemented by residual details to reconstruct a global historical "gist"; and Space Thumbnail Memory (STM), which discretizes the continuous stream into episodic clusters by employing an adaptive compression strategy to distill them into high-density space thumbnails. Extensive experiments show that FreshMem significantly boosts the Qwen2-VL baseline, yielding gains of 5.20%, 4.52%, and 2.34% on StreamingBench, OV-Bench, and OVO-Bench, respectively. As a training-free solution, FreshMem outperforms several fully fine-tuned methods, offering a highly efficient paradigm for long-horizon streaming video understanding.
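The dual-channel idea described above can be illustrated with a minimal sketch: a frequency channel that keeps low-frequency DCT coefficients plus a residual (in the spirit of MFM), and a spatial channel that pools a clip into a small thumbnail (in the spirit of STM). All function names, shapes, and parameters here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a frequency-space dual-channel memory.
# Not the paper's code; names, shapes, and parameters are assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def frequency_compress(frame, keep=8):
    """MFM-style step: project a frame into a few low-frequency DCT
    coefficients (a coarse "gist") plus a residual holding the detail."""
    coeffs = dctn(frame, norm="ortho")
    low = np.zeros_like(coeffs)
    low[:keep, :keep] = coeffs[:keep, :keep]  # keep coarse structure only
    gist = idctn(low, norm="ortho")           # reconstruct the gist
    residual = frame - gist                   # detail left over
    return low[:keep, :keep], residual

def spatial_thumbnail(frames, size=4):
    """STM-style step: distill a clip into one low-resolution thumbnail
    by averaging over time and block-pooling over space."""
    mean = frames.mean(axis=0)
    h, w = mean.shape
    return mean.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

rng = np.random.default_rng(0)
clip = rng.standard_normal((16, 32, 32))      # 16 frames of 32x32 "video"
coef, res = frequency_compress(clip[0])
thumb = spatial_thumbnail(clip)
print(coef.shape, res.shape, thumb.shape)     # (8, 8) (32, 32) (4, 4)
```

The point of the split is that the compact coefficients and thumbnails can be kept long-term at low cost, while the residual lets recent frames be reconstructed with full fidelity when needed.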
Problem

Research questions and friction points this paper is trying to address.

streaming video understanding
multimodal large language models
context fragmentation
detail loss
online perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-Space Hybrid Memory
Streaming Video Understanding
Brain-Inspired Memory
Training-Free Adaptation
Multimodal Large Language Models
Kangcong Li
Fudan University
Peng Ye
Shanghai Artificial Intelligence Laboratory, Shanghai, China; The Chinese University of Hong Kong, Hong Kong, China
Lin Zhang
Fudan University
Chao Wang
College of Future Information Technology, Fudan University, Shanghai, China
Huafeng Qin
Chongqing Technology and Business University
Biometrics (e.g., vein, face, and gait), computer vision, and machine learning
Tao Chen
Fudan University
Deep Learning, Medical Image Segmentation