$\infty$-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-language models suffer from fixed context windows and sparse frame sampling, which hinder modeling of the continuous temporal structure of long videos and lead to critical information loss. To address this, we propose a training-free, length-unbounded long-term memory (LTM) mechanism grounded in continuous-time modeling. Our approach features: (1) "sticky" memory evolution via continuous-time attention, enabling adaptive memory updating and retention; (2) enhanced spatiotemporal representation through integration with video Q-formers and dynamic-granularity memory allocation; and (3) zero-shot transferability to mainstream architectures, including Video-LLaMA and VideoChat2, without architectural modification or fine-tuning. Evaluated on video question answering, our method achieves significant gains over baselines while maintaining computational efficiency, demonstrating scalable, training-free long-video understanding with minimal overhead and advancing the practical deployment of video-language models for extended temporal reasoning.
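
As a rough illustration of the continuous-time attention idea above, the sketch below reads a context vector from a memory stored as a continuous signal $x(t) = B^\top \psi(t)$ using a Gaussian attention density. This is a minimal sketch under assumed choices (a Gaussian RBF basis, numerical quadrature, and the function name `continuous_attention_readout` are all illustrative), not the paper's implementation.

```python
import numpy as np

def continuous_attention_readout(B, centers, mu, sigma, width=0.05, grid=512):
    """Read from a continuous memory x(t) = B^T psi(t), t in [0, 1], with a
    Gaussian attention density p(t) ~ N(mu, sigma^2): c = integral p(t) x(t) dt.

    B: (N, dim) basis coefficients; centers: (N,) RBF centers.
    The RBF basis and quadrature grid are illustrative assumptions.
    """
    t = np.linspace(0.0, 1.0, grid)
    # Gaussian radial basis functions psi_n(t) evaluated on the grid: (grid, N)
    psi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    # Attention density over time, normalized to a probability mass on the grid
    p = np.exp(-((t - mu) ** 2) / (2 * sigma**2))
    p /= p.sum()
    # Quadrature: c ~= sum_i p(t_i) * x(t_i), with x(t_i) = psi(t_i) @ B
    return (p @ psi) @ B  # (dim,) attended context vector
```

Because attention here is a density over continuous time rather than a softmax over a fixed set of frames, the memory's resolution is decoupled from video length, which is what makes the mechanism length-unbounded.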

📝 Abstract
Current video-language models struggle with long-video understanding due to limited context lengths and reliance on sparse frame subsampling, often leading to information loss. This paper introduces $\infty$-Video, which can process arbitrarily long videos through a continuous-time long-term memory (LTM) consolidation mechanism. Our framework augments video Q-formers by allowing them to process unbounded video contexts efficiently and without requiring additional training. Through continuous attention, our approach dynamically allocates higher granularity to the most relevant video segments, forming "sticky" memories that evolve over time. Experiments with Video-LLaMA and VideoChat2 demonstrate improved performance in video question-answering tasks, showcasing the potential of continuous-time LTM mechanisms to enable scalable and training-free comprehension of long videos.
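
To make the consolidation loop concrete, here is a minimal NumPy sketch under simplifying assumptions: the LTM is a signal $x(t) = B^\top \psi(t)$ over Gaussian RBFs, each incoming chunk of frame embeddings is absorbed by contracting the time axis and refitting the coefficients with ridge regression, and "sticky" memories bias where the old signal is re-read toward regions that received high attention. The class `ContinuousLTM` and every name and hyperparameter below are hypothetical, not the authors' code.

```python
import numpy as np

def rbf(t, centers, width=0.05):
    """Gaussian RBF features psi(t): (T,) times x (N,) centers -> (T, N)."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))

class ContinuousLTM:
    """Toy continuous-time LTM: x(t) = B^T psi(t), fit by ridge regression."""

    def __init__(self, num_basis=64, dim=128, ridge=1e-3, num_samples=256):
        self.centers = np.linspace(0.0, 1.0, num_basis)
        self.B = np.zeros((num_basis, dim))  # basis coefficients
        self.ridge = ridge
        # Times at which the old memory is re-read during consolidation
        self.sample_times = np.linspace(0.0, 1.0, num_samples)

    def consolidate(self, chunk):
        """Fold a new chunk of frame embeddings (T_new, dim) into the memory."""
        old = rbf(self.sample_times, self.centers) @ self.B  # reconstruct past
        # Contract the time axis: past occupies [0, tau], the new chunk (tau, 1]
        tau = len(old) / (len(old) + len(chunk))
        t_all = np.concatenate([
            self.sample_times * tau,
            tau + (1 - tau) * np.linspace(0.0, 1.0, len(chunk)),
        ])
        X = np.vstack([old, chunk])
        Psi = rbf(t_all, self.centers)
        # Ridge fit: B = (Psi^T Psi + lambda I)^{-1} Psi^T X
        G = Psi.T @ Psi + self.ridge * np.eye(len(self.centers))
        self.B = np.linalg.solve(G, Psi.T @ X)

    def stick(self, attn_mass):
        """'Sticky' memories: re-sample read times where attention mass was high."""
        p = attn_mass / attn_mass.sum()
        grid = np.linspace(0.0, 1.0, len(p))
        self.sample_times = np.sort(
            np.random.choice(grid, size=len(self.sample_times), p=p))
```

Since the past is always compressed into a fixed number of basis coefficients, memory cost stays constant as more chunks are consolidated, which is what lets a frozen video Q-former attend over arbitrarily long inputs without retraining.
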
Problem

Research questions and friction points this paper is trying to address.

Long Video Analysis
Memory Updating
Video-Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Long-Term Memory
Arbitrary Length Videos
Video Understanding
Saul Santos
Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa
António Farinhas
Sword Health
Machine Learning · Natural Language Processing
Daniel McNamee
Champalimaud Research
André F. T. Martins
Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, ELLIS Unit Lisbon, Unbabel