🤖 AI Summary
This work addresses the challenge of efficiently understanding long-form videos with complex events, diverse scenes, and long-range dependencies using multimodal large language models. Direct encoding incurs prohibitive computational costs, while naive video-to-text conversion often results in redundant or fragmented information. To overcome these limitations, the authors propose a multimodal, multi-granular structured representation that segments videos at key turning points and constructs a three-tiered descriptive hierarchy encompassing global narrative, event-level summaries, and fine-grained visual details. This approach uniquely integrates critical event segmentation with hierarchical semantic representation, achieving both semantic completeness and substantial computational efficiency. Experiments demonstrate significant improvements over state-of-the-art baselines across video question answering, summarization, and retrieval tasks, yielding a 19.67% performance gain on hour-long videos and reducing processing latency to 45.4% of baseline levels.
📝 Abstract
Long videos, ranging from minutes to hours, present significant challenges for current Multi-modal Large Language Models (MLLMs) due to their complex events, diverse scenes, and long-range dependencies. Direct encoding of such videos is computationally prohibitive, while simple video-to-text conversion often results in redundant or fragmented content. To address these limitations, we introduce MMViR, a novel multi-modal, multi-grained structured representation for long video understanding. MMViR identifies key turning points to segment the video and constructs a three-level description that couples global narratives with fine-grained visual details. This design supports efficient query-based retrieval and generalizes well across diverse scenarios. Extensive evaluations across three tasks, including QA, summarization, and retrieval, show that MMViR outperforms the strongest prior method, achieving a 19.67% improvement in hour-long video understanding while reducing processing latency to 45.4% of the original.
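To make the three-level representation concrete, the sketch below models it as a simple data structure: a global narrative at the top, event segments bounded by turning points in the middle, and fine-grained visual details at the bottom, with a naive keyword match standing in for query-based retrieval. All class and field names are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-level structured video representation.
# Names and the keyword-match retrieval are illustrative, not MMViR's method.

@dataclass
class FrameDetail:
    timestamp: float          # seconds into the video
    description: str          # fine-grained visual detail (objects, actions)

@dataclass
class EventSegment:
    start: float              # segment boundaries at key turning points
    end: float
    summary: str              # event-level summary
    details: list[FrameDetail] = field(default_factory=list)

@dataclass
class VideoRepresentation:
    global_narrative: str     # top-level narrative of the whole video
    events: list[EventSegment] = field(default_factory=list)

    def retrieve(self, query: str) -> list[EventSegment]:
        # Naive substring match as a stand-in for query-based retrieval;
        # a real system would use embedding similarity.
        q = query.lower()
        return [e for e in self.events if q in e.summary.lower()]

# Usage: build a tiny representation and retrieve a matching event.
rep = VideoRepresentation(
    global_narrative="A cooking tutorial from prep to plating.",
    events=[
        EventSegment(0.0, 120.0, "Chef chops vegetables",
                     [FrameDetail(15.0, "knife, cutting board, carrots")]),
        EventSegment(120.0, 300.0, "Chef simmers the sauce"),
    ],
)
hits = rep.retrieve("chops")
```

Only the segments whose summaries match the query need to be passed on to the language model, which is how a hierarchical representation can cut processing cost relative to encoding the full video.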