🤖 AI Summary
Long-form video understanding (LVU) faces challenges in efficiently localizing sparse, distributed spatiotemporal cues. Existing agent-based approaches rely on query-agnostic caption generation, leading to redundant computation and loss of fine-grained spatiotemporal information. This paper introduces the first active video perception framework tailored for LVU, bringing active perception theory into the domain via a query-driven "plan–observe–reflect" iterative mechanism. An MLLM agent dynamically selects spatiotemporal observation locations directly on pixel-level video frames, precisely extracting and incrementally accumulating time-stamped, compact evidence. Compared to the strongest prior agent method, the approach achieves an average accuracy improvement of 5.7% across five LVU benchmarks while reducing inference time to 18.4% and input tokens to 12.4% of that method's, significantly enhancing both efficiency and fine-grained temporal reasoning.
📝 Abstract
Long video understanding (LVU) is challenging because answering real-world queries often depends on sparse, temporally dispersed cues buried in hours of mostly redundant and irrelevant content. While agentic pipelines improve video reasoning capabilities, prevailing frameworks rely on a query-agnostic captioner to perceive video information, which wastes computation on irrelevant content and blurs fine-grained temporal and spatial information. Motivated by active perception theory, we argue that LVU agents should actively decide what, when, and where to observe, and continuously assess whether the current observations are sufficient to answer the query. We present Active Video Perception (AVP), an evidence-seeking framework that treats the video as an interactive environment and acquires compact, query-relevant evidence directly from pixels. Concretely, AVP runs an iterative plan-observe-reflect process with MLLM agents. In each round, a planner proposes targeted video interactions, an observer executes them to extract time-stamped evidence, and a reflector evaluates whether the accumulated evidence suffices to answer the query, either halting with an answer or triggering further observation. Across five LVU benchmarks, AVP achieves the highest performance with significant improvements. Notably, AVP outperforms the best agentic method by 5.7% in average accuracy while requiring only 18.4% of its inference time and 12.4% of its input tokens.
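The plan-observe-reflect loop described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the `plan`, `observe`, and `reflect` functions below are hypothetical stubs standing in for the MLLM planner, observer, and reflector agents, and the "video" is just a list of frame descriptions rather than pixels.

```python
# Hypothetical sketch of AVP's iterative plan-observe-reflect loop.
# In the actual framework these three roles are MLLM agents operating
# on pixel-level frames; here they are toy stand-ins.

def plan(query, evidence, video_len):
    """Planner: propose the next timestamp to observe (stub: first unvisited)."""
    seen = {t for t, _ in evidence}
    for t in range(video_len):
        if t not in seen:
            return t
    return None  # nothing left to observe

def observe(video, t):
    """Observer: extract compact, time-stamped evidence at timestamp t (stub)."""
    return (t, video[t])

def reflect(query, evidence):
    """Reflector: decide whether evidence suffices to answer the query (stub)."""
    return any(query in frame for _, frame in evidence)

def active_video_perception(query, video, max_rounds=8):
    """Iteratively accumulate evidence until it is judged sufficient."""
    evidence = []
    for _ in range(max_rounds):
        t = plan(query, evidence, len(video))
        if t is None:
            break  # observation space exhausted
        evidence.append(observe(video, t))
        if reflect(query, evidence):
            return evidence  # sufficient: halt and answer
    return evidence  # budget exhausted: answer from what was gathered

video = ["intro scene", "a cat appears", "closing credits"]
print(active_video_perception("cat", video))
# → [(0, 'intro scene'), (1, 'a cat appears')]
```

The key property this loop captures is query-driven early stopping: observation halts as soon as the reflector deems the evidence sufficient, rather than captioning the entire video up front.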