🤖 AI Summary
This work addresses the limited active perception and dynamic visual exploration capabilities of existing video-language models (VLMs) in complex reasoning tasks, which typically rely heavily on synthetic data or parameter fine-tuning. The authors propose TIR-Flow, a novel framework that, for the first time, integrates an active perception mechanism into frozen VLMs, enhancing their high-order reasoning abilities without updating model parameters or introducing additional training data. TIR-Flow establishes a System-2-like, long-horizon video understanding pipeline through three core components: Hierarchical Task Decomposition (HDD), Active High-resolution Attention-based Perception (HAP), and Evidence-Based Accumulative Reasoning (EBA). Evaluated across seven video reasoning benchmarks, the method achieves an average performance gain of 5.9%, with a notable 10.5% improvement on Egoschema, significantly outperforming current strong baselines.
📄 Abstract
While Large Video-Language Models (Video-LLMs) have achieved remarkable progress in perception, their reasoning capabilities remain a bottleneck. Existing solutions typically resort to a heavy "data engineering" paradigm: synthesizing large-scale Chain-of-Thought (CoT) datasets followed by Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This pipeline primarily optimizes probability sampling efficiency and aligns output distributions, but fails to activate the intrinsic intelligence required for dynamic visual exploration. In this work, we propose TIR-Flow, a novel framework that shifts the paradigm from passive processing to active video searching and reasoning, without additional data or parameter updates. Concretely, our framework operates through three synergistic modules: HDD decomposes complex queries into a set of verifiable sub-tasks; HAP actively directs visual attention to gather high-resolution evidence for hypothesis validation; EBA maintains a persistent workspace to accumulate and update the discovered clues for logical reasoning. Extensive experiments on seven benchmarks demonstrate that TIR-Flow significantly outperforms recent strong baselines, delivering an average performance boost of 5.9%, with gains reaching 10.5% on Egoschema. Our analysis confirms that empowering frozen VLMs with System-2-like active perception is a scalable path toward solving long-horizon video reasoning.
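The decompose-perceive-accumulate loop described above can be sketched in miniature. This is only an illustrative toy, not the authors' implementation: here a "video" is a list of per-frame captions, a sub-task is a keyword to verify, "high-resolution perception" is a substring scan, and all function and class names (`hdd_decompose`, `hap_perceive`, `EvidenceWorkspace`, `tir_flow`) are hypothetical stand-ins for the paper's HDD, HAP, and EBA modules.

```python
from dataclasses import dataclass, field


def hdd_decompose(query: str) -> list[str]:
    """HDD (toy): split a complex query into verifiable sub-tasks."""
    return [part.strip() for part in query.split(" and ") if part.strip()]


def hap_perceive(sub_task: str, frames: list[str]) -> list[tuple[int, str]]:
    """HAP (toy): actively scan frames for evidence supporting a sub-task."""
    return [(i, cap) for i, cap in enumerate(frames) if sub_task in cap]


@dataclass
class EvidenceWorkspace:
    """EBA (toy): persistent workspace accumulating clues across sub-tasks."""
    clues: dict = field(default_factory=dict)

    def update(self, sub_task: str, evidence: list) -> None:
        self.clues[sub_task] = evidence

    def conclude(self) -> bool:
        # Every sub-task must have at least one piece of supporting evidence.
        return all(self.clues.values())


def tir_flow(query: str, frames: list[str]) -> bool:
    """Run the toy loop: decompose, perceive per sub-task, then reason."""
    workspace = EvidenceWorkspace()
    for sub_task in hdd_decompose(query):
        workspace.update(sub_task, hap_perceive(sub_task, frames))
    return workspace.conclude()


frames = ["person opens fridge", "person pours milk", "person drinks coffee"]
print(tir_flow("pours milk and drinks coffee", frames))  # True
print(tir_flow("pours milk and washes dishes", frames))  # False
```

The point of the sketch is the control flow with a frozen backbone: no component is trained; the query is decomposed, each hypothesis is checked against the video, and the verdict is read off the accumulated evidence.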