🤖 AI Summary
To address two challenges in key frame localization for hour-long videos, namely susceptibility to redundant information and difficulty modeling complex spatiotemporal hierarchies, this paper proposes VideoMiner. First, it constructs an adaptive hierarchical tree structure that enables progressive parsing from video to events to key frames while preserving temporal coherence. Second, it introduces T-GRPO, a tree-based group relative policy optimization reinforcement learning algorithm designed specifically for tree structures: it integrates question-guided spatiotemporal modeling at the event level with a "tree growth auxin" mechanism that dynamically regulates expansion depth, and it incentivizes the model to spontaneously generate interpretable reasoning chains. VideoMiner combines multimodal large language models, hierarchical clustering, and iterative caption enhancement, achieving state-of-the-art performance across multiple long-video understanding benchmarks and improving both localization accuracy and inference efficiency. The code is publicly available.
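The tree-building idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `build_tree`, the halving-based event split, and the 0.8 auxin decay are all stand-ins for the actual segmentation, captioning, and clustering components, which the summary does not specify. The point is only to show how a growth signal ("auxin") can terminate expansion adaptively rather than at a fixed depth.

```python
# Hypothetical sketch of a VideoMiner-style hierarchical expansion loop.
# The event split (contiguous halves) and the auxin decay factor (0.8)
# are illustrative assumptions, not the paper's actual components.

def build_tree(frames, depth=0, max_depth=3, auxin=1.0):
    """Recursively group frames into events, expanding deeper only
    while the 'auxin' growth signal stays above a threshold."""
    node = {"frames": frames, "children": []}
    if depth >= max_depth or auxin < 0.5 or len(frames) <= 2:
        return node  # leaf: a small span of candidate key frames
    mid = len(frames) // 2  # placeholder split into two "events"
    for event in (frames[:mid], frames[mid:]):
        # Decay auxin with depth so expansion terminates adaptively.
        node["children"].append(
            build_tree(event, depth + 1, max_depth, auxin * 0.8)
        )
    return node

def leaves(node):
    """Collect leaf spans in temporal order (the key-frame candidates)."""
    if not node["children"]:
        return [node["frames"]]
    out = []
    for child in node["children"]:
        out.extend(leaves(child))
    return out
```

Because each node keeps a contiguous span and children are visited in order, the leaves preserve the video's temporal coherence, which is the property the summary highlights.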
📝 Abstract
Understanding hour-long videos with multi-modal large language models (MM-LLMs) enriches the landscape of human-centered AI applications. However, for end-to-end video understanding with LLMs, uniformly sampling video frames overwhelms the LLM with irrelevant information as video length increases. Existing hierarchical key frame extraction methods improve the accuracy of video understanding but still face two critical challenges. 1) How can the interference of extensive redundant information in long videos be mitigated? 2) How can a model dynamically adapt to complex hierarchical structures while accurately identifying key frames? To address these issues, we propose VideoMiner, which iteratively segments, captions, and clusters long videos, forming a hierarchical tree structure. The proposed VideoMiner progresses from long videos to events to frames while preserving temporal coherence, effectively addressing the first challenge. To precisely locate key frames, we introduce T-GRPO, a tree-based group relative policy optimization reinforcement learning method that guides the exploration of VideoMiner. The proposed T-GRPO is specifically designed for tree structures, integrating spatiotemporal information at the event level under the guidance of the question, thus solving the second challenge. We achieve superior performance on all long-video understanding tasks and uncover several interesting insights. Our proposed T-GRPO surprisingly incentivizes the model to spontaneously generate a reasoning chain. Additionally, the designed tree growth auxin dynamically adjusts the expansion depth, yielding gains in both accuracy and efficiency. The code is publicly available at https://github.com/caoxinye/VideoMiner.
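The abstract does not detail how T-GRPO adapts group relative policy optimization to trees, but the group-relative core it builds on can be sketched. In standard GRPO, several rollouts are sampled for the same input and each rollout's reward is normalized against the group's statistics; the assumption here (not stated in the abstract) is that T-GRPO forms such groups per tree node. The function name and the zero-variance guard are illustrative choices.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage estimation: normalize each rollout's
    reward by the group's mean and standard deviation. Grouping
    rollouts per tree node is an assumption about T-GRPO, not a
    detail given in the abstract."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]
```

Rollouts that score above the group mean get positive advantages and are reinforced; in a tree setting this would bias exploration toward the more question-relevant branches.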