🤖 AI Summary
This work addresses a key limitation of existing video understanding methods: a disconnect between reasoning and perception that prevents them from dynamically acquiring evidence. The authors propose an agent-based framework that endows large language models, for the first time, with the ability to actively plan the temporal scope and sampling density of video observations. By establishing a closed-loop “reason–plan–observe” process, the model performs on-demand, progressive evidence collection. The framework integrates a multimodal toolset built upon vision-language models, enabling wide-range scanning, localized focus, and cross-temporal evidence fusion—all without requiring fine-tuning. Evaluated on long-video benchmarks such as LVBench and Video-MME, the approach achieves accuracy gains exceeding 5%, while also improving the model’s robustness and interpretability.
📝 Abstract
The dense, temporal nature of video presents a profound challenge for automated analysis. Despite the use of powerful Vision-Language Models, prevailing methods for video understanding are limited by an inherent disconnect between reasoning and perception: they rely on static, pre-processed information and cannot actively seek raw evidence from video as their understanding evolves. To address this, we introduce LensWalk, a flexible agentic framework that empowers a Large Language Model reasoner to actively control its own visual observation. LensWalk establishes a tight reason–plan–observe loop in which the agent dynamically specifies, at each step, the temporal scope and sampling density of the video it observes. Using a suite of versatile, Vision-Language-Model-based tools parameterized by these specifications, the agent can perform broad scans for cues, focus on specific segments for fact extraction, and stitch evidence from multiple moments for holistic verification. This design allows for progressive, on-demand evidence gathering that directly serves the agent's evolving chain of thought. Without requiring any model fine-tuning, LensWalk delivers substantial, plug-and-play performance gains across multiple model recipes, boosting their accuracy by over 5% on challenging long-video benchmarks like LVBench and Video-MME. Our analysis reveals that enabling an agent to control how it sees is key to unlocking more accurate, robust, and interpretable video reasoning.
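The reason–plan–observe loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `ObservationPlan` schema, the `observe` stub, and the fixed zoom-in policy are all hypothetical stand-ins; in LensWalk an LLM reasoner would decide, from its evolving chain of thought, where to look next and at what density.

```python
from dataclasses import dataclass

@dataclass
class ObservationPlan:
    """Agent-specified temporal scope and sampling density (hypothetical schema)."""
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds
    fps: float      # sampling density for this observation

def observe(plan: ObservationPlan) -> str:
    """Stub for a VLM tool call: in the framework this would sample frames
    at the requested density and return textual evidence about the segment."""
    n_frames = int((plan.end_s - plan.start_s) * plan.fps)
    return f"evidence from {plan.start_s:.0f}-{plan.end_s:.0f}s ({n_frames} frames)"

def reason_plan_observe(question: str, video_len_s: float, max_steps: int = 3):
    """Minimal sketch of the closed loop: broad sparse scan first, then
    progressively narrower, denser observations on demand."""
    evidence = []
    plan = ObservationPlan(0.0, video_len_s, fps=0.2)  # wide-range scan
    for step in range(max_steps):
        evidence.append(observe(plan))
        # The real agent would stop here once evidence suffices to answer.
        if step == max_steps - 1:
            break
        # Toy refinement policy: zoom into the middle half at double density.
        span = (plan.end_s - plan.start_s) / 2
        mid = (plan.start_s + plan.end_s) / 2
        plan = ObservationPlan(mid - span / 2, mid + span / 2, fps=2 * plan.fps)
    return evidence

log = reason_plan_observe("What happens after the goal?", video_len_s=600.0)
```

Each iteration halves the temporal scope while doubling the sampling density, mimicking how the agent trades breadth for detail as its hypothesis sharpens.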