MMViR: A Multi-Modal and Multi-Granularity Representation for Long-range Video Understanding

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently understanding long-form videos with complex events, diverse scenes, and long-range dependencies using multimodal large language models. Direct encoding incurs prohibitive computational costs, while naive video-to-text conversion often results in redundant or fragmented information. To overcome these limitations, the authors propose a multimodal, multi-granular structured representation that segments videos at key turning points and constructs a three-tiered descriptive hierarchy encompassing global narrative, event-level summaries, and fine-grained visual details. This approach uniquely integrates critical event segmentation with hierarchical semantic representation, achieving both semantic completeness and substantial computational efficiency. Experiments demonstrate significant improvements over state-of-the-art baselines across video question answering, summarization, and retrieval tasks, yielding a 19.67% performance gain on hour-long videos and reducing processing latency to 45.4% of baseline levels.
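To make the three-tier structure concrete, the following is a minimal sketch of what such a representation might look like as a data structure: a global narrative (tier 1), event-level summaries delimited by turning points (tier 2), and fine-grained visual details per event (tier 3), with a toy keyword-overlap retriever standing in for query-based retrieval. All class and field names here are hypothetical illustrations, not the paper's actual API; MMViR's real pipeline builds these descriptions with MLLMs and uses a learned retrieval step.

```python
from dataclasses import dataclass, field

@dataclass
class EventSegment:
    """Tier 2/3: one event between two detected turning points."""
    start_sec: float
    end_sec: float
    summary: str                                       # tier 2: event-level summary
    details: list[str] = field(default_factory=list)   # tier 3: fine-grained visual details

@dataclass
class VideoRepresentation:
    """Tier 1 plus the list of segmented events (hypothetical structure)."""
    global_narrative: str
    events: list[EventSegment] = field(default_factory=list)

    def retrieve(self, query: str) -> list[EventSegment]:
        # Toy stand-in for query-based retrieval: keyword overlap
        # against event summaries. The paper's method is more sophisticated.
        terms = set(query.lower().split())
        return [e for e in self.events
                if terms & set(e.summary.lower().split())]

# Example usage with made-up content
rep = VideoRepresentation(
    global_narrative="A cooking tutorial covering prep, cooking, and plating.",
    events=[
        EventSegment(0, 300, "chef preps vegetables",
                     ["knife close-up", "cutting board"]),
        EventSegment(300, 900, "chef cooks the sauce",
                     ["pan on stove", "steam rising"]),
    ],
)
hits = rep.retrieve("how is the sauce cooked")  # matches the second event only
```

The point of the hierarchy is that a query can be answered by touching only the matching event's summary and details rather than re-encoding the full video, which is where the reported latency reduction comes from.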

📝 Abstract
Long videos, ranging from minutes to hours, present significant challenges for current Multi-modal Large Language Models (MLLMs) due to their complex events, diverse scenes, and long-range dependencies. Direct encoding of such videos is computationally too expensive, while simple video-to-text conversion often results in redundant or fragmented content. To address these limitations, we introduce MMViR, a novel multi-modal, multi-grained structured representation for long video understanding. MMViR identifies key turning points to segment the video and constructs a three-level description that couples global narratives with fine-grained visual details. This design supports efficient query-based retrieval and generalizes well across various scenarios. Extensive evaluations across three tasks, including QA, summarization, and retrieval, show that MMViR outperforms the prior strongest method, achieving a 19.67% improvement in hour-long video understanding while reducing processing latency to 45.4% of the original.
Problem

Research questions and friction points this paper is trying to address.

long-range video understanding
multi-modal representation
long videos
video-to-text conversion
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal representation
multi-granularity
long-range video understanding
structured video segmentation
efficient video retrieval