WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-form video understanding is hindered by limited model context windows and loss of fine-grained visual details during video abstraction. Existing memory-augmented approaches rely excessively on textual summaries and employ fixed-time-scale retrieval, impeding cross-granularity event modeling and complex visual reasoning. To address these limitations, we propose a dynamic multimodal memory mechanism comprising three complementary memory modules—event-level, semantic-level, and visual-level—integrated with multi-granularity memory indexing, visual feature preservation, and query-driven iterative retrieval. This architecture enables adaptive, cross-temporal-scale retrieval, overcoming the constraints of text-centric paradigms and rigid temporal granularity. Evaluated on five long-video question-answering benchmarks, our method achieves an average 8.4% improvement over state-of-the-art methods, significantly enhancing fine-grained comprehension for videos spanning hours to days.

📝 Abstract
Recent advances in video large language models have demonstrated strong capabilities in understanding short clips. However, scaling them to hours- or days-long videos remains highly challenging due to limited context capacity and the loss of critical visual details during abstraction. Existing memory-augmented methods mitigate this by leveraging textual summaries of video segments, yet they heavily rely on text and fail to utilize visual evidence when reasoning over complex scenes. Moreover, retrieving from fixed temporal scales further limits their flexibility in capturing events that span variable durations. To address this, we introduce WorldMM, a novel multimodal memory agent that constructs and retrieves from multiple complementary memories, encompassing both textual and visual representations. WorldMM comprises three types of memory: episodic memory indexes factual events across multiple temporal scales, semantic memory continuously updates high-level conceptual knowledge, and visual memory preserves detailed information about scenes. During inference, an adaptive retrieval agent iteratively selects the most relevant memory source and leverages multiple temporal granularities based on the query, continuing until it determines that sufficient information has been gathered. WorldMM significantly outperforms existing baselines across five long video question-answering benchmarks, achieving an average 8.4% performance gain over previous state-of-the-art methods, showing its effectiveness on long video reasoning.
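The abstract describes episodic memory that "indexes factual events across multiple temporal scales." As a rough illustration of that idea (the function name, scales, and merging strategy below are assumptions, not the paper's actual implementation), fine-grained per-segment captions can be folded into progressively coarser time windows:

```python
# Hypothetical sketch of multi-temporal-scale episodic indexing.
# Fine-grained segment captions are merged into coarser windows;
# plain string concatenation stands in for an LLM summarizer.

def build_episodic_index(captions, scales=(1, 4, 16)):
    """captions: list of per-segment text descriptions (finest scale).
    Returns {window_size: list of merged summaries}, one entry per scale."""
    index = {}
    for w in scales:
        index[w] = [
            " | ".join(captions[i:i + w])
            for i in range(0, len(captions), w)
        ]
    return index
```

A query about a brief action would then be matched against the fine scale, while a question spanning a whole activity would hit the coarser windows, which is the cross-granularity retrieval the abstract argues fixed-time-scale methods cannot do.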
Problem

Research questions and friction points this paper is trying to address.

Addresses the limited context capacity of video reasoning models on hour- to day-long videos
Mitigates over-reliance on textual summaries by incorporating visual evidence during reasoning
Enables flexible capture of events that span variable temporal durations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal memory with textual and visual representations
Adaptive retrieval across multiple temporal granularities
Three complementary memory types: episodic, semantic, visual
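The three bullets above can be sketched as a single retrieval loop. This is a minimal toy version under stated assumptions: the memory structures, the keyword matching, and the evidence-count stopping rule are illustrative stand-ins for the paper's learned retrieval agent and its sufficiency judgment, not its actual method:

```python
from dataclasses import dataclass

# Hypothetical sketch of WorldMM-style adaptive multimodal retrieval.
# All names and heuristics here are illustrative assumptions.

@dataclass
class MemoryBank:
    episodic: dict   # temporal scale -> list of event summaries
    semantic: list   # high-level conceptual facts, continuously updated
    visual: dict     # timestamp -> reference to preserved visual detail

def retrieve(bank, source, query, scale="minute"):
    """Return candidate evidence from one memory source (keyword match
    stands in for the paper's query-driven retrieval)."""
    if source == "episodic":
        return [e for e in bank.episodic.get(scale, []) if query in e]
    if source == "semantic":
        return [f for f in bank.semantic if query in f]
    return [v for _, v in sorted(bank.visual.items()) if query in v]

def answer(bank, query, max_steps=3):
    """Iteratively select memory sources and accumulate evidence until
    a sufficiency check passes (here: a simple evidence count)."""
    evidence = []
    for source in ["episodic", "semantic", "visual"][:max_steps]:
        evidence += retrieve(bank, source, query)
        if len(evidence) >= 2:  # stand-in for the agent's "enough info" decision
            break
    return evidence
```

In the actual system the source selection and stopping decision are made by the retrieval agent conditioned on the query; the loop above only conveys the control flow of iterating across complementary memories rather than querying one fixed store at one fixed scale.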