FluxMem: Adaptive Hierarchical Memory for Streaming Video Understanding

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost and memory consumption that redundant visual information incurs in streaming video understanding by proposing a training-free, adaptive hierarchical memory compression framework. The method employs a two-stage architecture comprising Temporal Adjacency Selection (TAS) and Spatial Domain Consolidation (SDC), and dynamically adjusts token compression rates based on intrinsic scene statistics, enabling efficient online processing without manual hyperparameter tuning. Evaluated on StreamingBench and OVO-Bench, the approach achieves state-of-the-art scores of 76.4 and 67.2, respectively, while reducing latency by 69.9% and peak GPU memory usage by 34.5%. On MLVU, it attains a competitive accuracy of 73.1 using 65% fewer visual tokens, demonstrating significant efficiency gains without compromising offline accuracy.
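The paper's actual TAS module is not reproduced here, but the core idea it describes — dropping temporally redundant tokens with a compression rate derived from scene statistics rather than a fixed hyperparameter — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `temporal_adjacency_select`, the per-position cosine similarity, and the mean-similarity threshold are not the authors' published algorithm.

```python
import numpy as np

def temporal_adjacency_select(prev_tokens, curr_tokens):
    """Keep only the current-frame tokens that differ enough from the
    same-position tokens of the previous frame.

    The keep/drop threshold comes from the similarity distribution itself
    (here simply its mean), so a static scene drops most tokens while a
    dynamic scene keeps more -- no manual tuning of a fixed rate.

    prev_tokens, curr_tokens: (N, D) arrays of N visual tokens of dim D.
    Returns indices of current-frame tokens to keep.
    """
    # Cosine similarity between corresponding tokens of adjacent frames.
    prev_n = prev_tokens / np.linalg.norm(prev_tokens, axis=1, keepdims=True)
    curr_n = curr_tokens / np.linalg.norm(curr_tokens, axis=1, keepdims=True)
    sim = np.sum(prev_n * curr_n, axis=1)

    # Adaptive threshold from the scene's own statistics.
    threshold = sim.mean()
    return np.where(sim < threshold)[0]

# Mostly static frame pair: only the first 4 of 16 tokens changed,
# so only (a subset of) those survive selection.
rng = np.random.default_rng(0)
prev = rng.normal(size=(16, 8))
curr = prev.copy()
curr[:4] = rng.normal(size=(4, 8))
kept = temporal_adjacency_select(prev, curr)
print(len(kept), "of", len(curr), "tokens kept")
```

Because unchanged tokens have similarity exactly 1.0, they always sit above the mean and are dropped; only the changed tokens can survive, which is the intended behavior of temporal redundancy removal.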

📝 Abstract
This paper presents FluxMem, a training-free framework for efficient streaming video understanding. FluxMem adaptively compresses redundant visual memory through a hierarchical, two-stage design: (1) a Temporal Adjacency Selection (TAS) module removes redundant visual tokens across adjacent frames, and (2) a Spatial Domain Consolidation (SDC) module further merges spatially repetitive regions within each frame into compact representations. To adapt effectively to dynamic scenes, we introduce a self-adaptive token compression mechanism in both TAS and SDC, which automatically determines the compression rate based on intrinsic scene statistics rather than manual tuning. Extensive experiments demonstrate that FluxMem achieves new state-of-the-art results on existing online video benchmarks, reaching 76.4 on StreamingBench and 67.2 on OVO-Bench under real-time settings, while reducing latency by 69.9% and peak GPU memory by 34.5% on OVO-Bench. Furthermore, it maintains strong offline performance, achieving 73.1 on MLVU while using 65% fewer visual tokens.
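The abstract's second stage, spatial consolidation within a frame, can likewise be sketched in a hedged way. The greedy pair-merging below and its mean-plus-std threshold are illustrative assumptions, not the published SDC algorithm; they only demonstrate how a merge rate can be derived from a frame's own pairwise-similarity statistics, so uniform frames compress harder than cluttered ones.

```python
import numpy as np

def spatial_consolidate(tokens):
    """Greedily average pairs of highly similar tokens within one frame.

    The merge threshold is computed from the frame's own pairwise cosine
    similarities (mean + std), so the compression rate adapts to scene
    content instead of being a fixed hyperparameter.

    tokens: (N, D) array. Returns the consolidated (M, D) array, M <= N.
    """
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # never match a token with itself

    # Adaptive threshold from this frame's statistics.
    upper = sim[np.triu_indices(len(tokens), k=1)]
    threshold = upper.mean() + upper.std()

    merged, used = [], set()
    for i in range(len(tokens)):
        if i in used:
            continue
        j = int(np.argmax(sim[i]))          # nearest neighbor of token i
        if j not in used and sim[i, j] > threshold:
            merged.append((tokens[i] + tokens[j]) / 2)  # merge the pair
            used.update({i, j})
        else:
            merged.append(tokens[i])
            used.add(i)
    return np.stack(merged)

# A frame of 16 tokens built from 4 patterns, each repeated 4 times with
# small noise: the near-duplicates get merged, the distinct ones remain.
rng = np.random.default_rng(1)
frame = np.repeat(rng.normal(size=(4, 8)), 4, axis=0)
frame += 0.01 * rng.normal(size=frame.shape)
out = spatial_consolidate(frame)
print(out.shape[0], "tokens after consolidation, from", frame.shape[0])
```

A single greedy pass is used for brevity; iterating until no pair exceeds the threshold would compress further at extra cost.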
Problem

Research questions and friction points this paper is trying to address.

streaming video understanding
visual memory compression
real-time inference
GPU memory efficiency
redundancy reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive compression
hierarchical memory
streaming video understanding
training-free framework
visual token reduction
👥 Authors
Yiweng Xie
Institute of Trustworthy Embodied AI, Fudan University; Shanghai Innovation Institute; Shanghai Key Laboratory of Multimodal Embodied AI
Bo He
University of Maryland, College Park
Video Understanding
Junke Wang
Fudan University
Computer Vision
Xiangyu Zheng
Institute of Trustworthy Embodied AI, Fudan University; Shanghai Key Laboratory of Multimodal Embodied AI
Ziyi Ye
Institute of Trustworthy Embodied AI, Fudan University; Shanghai Key Laboratory of Multimodal Embodied AI
Zuxuan Wu
Fudan University