TIR-Flow: Active Video Search and Reasoning with Frozen VLMs

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited active perception and dynamic visual exploration capabilities of existing video-language models (VLMs) in complex reasoning tasks; prior solutions rely heavily on synthetic data or parameter fine-tuning. The authors propose TIR-Flow, a framework that, for the first time, integrates an active perception mechanism into frozen VLMs, enhancing their high-order reasoning abilities without updating model parameters or introducing additional training data. TIR-Flow establishes a System-2-like, long-horizon video understanding pipeline through three core components: Hierarchical Task Decomposition (HDD), Active High-resolution Attention-based Perception (HAP), and Evidence-Based Accumulative Reasoning (EBA). Evaluated across seven video reasoning benchmarks, the method achieves an average performance gain of 5.9%, with a notable 10.5% improvement on EgoSchema, significantly outperforming current strong baselines.

๐Ÿ“ Abstract
While Large Video-Language Models (Video-LLMs) have achieved remarkable progress in perception, their reasoning capabilities remain a bottleneck. Existing solutions typically resort to a heavy "data engineering" paradigm: synthesizing large-scale Chain-of-Thought (CoT) datasets followed by Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This pipeline primarily optimizes probability sampling efficiency and aligns output distributions, but fails to activate the intrinsic intelligence required for dynamic visual exploration. In this work, we propose TIR-Flow, a novel framework that shifts the paradigm from passive processing to active video searching and reasoning without additional data or parameter updates. Concretely, our framework operates through three synergistic modules: HDD decomposes complex queries into a set of verifiable sub-tasks; HAP actively directs visual attention to gather high-resolution evidence for hypothesis validation; EBA maintains a persistent workspace to accumulate and update discovered clues for logical reasoning. Extensive experiments on seven benchmarks demonstrate that TIR-Flow significantly outperforms recent strong baselines, delivering an average performance boost of 5.9%, with gains reaching 10.5% on EgoSchema. Our analysis confirms that empowering frozen VLMs with System-2-like active perception is a scalable path toward solving long-horizon video reasoning.
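The three-module loop described in the abstract can be sketched as a minimal control flow. This is an illustrative reading, not the paper's implementation: the `vlm` object and the names `decompose`, `attend_high_res`, and `reason_over` are assumed stand-ins for prompted calls into a frozen VLM.

```python
def tir_flow(query, video_frames, vlm):
    """Hedged sketch of the TIR-Flow pipeline over a frozen VLM."""
    # HDD: decompose the complex query into verifiable sub-tasks.
    sub_tasks = vlm.decompose(query)

    # EBA: persistent workspace that accumulates discovered clues.
    workspace = []

    for task in sub_tasks:
        # HAP: actively direct visual attention to gather
        # high-resolution evidence validating the current sub-task.
        evidence = vlm.attend_high_res(task, video_frames)
        workspace.append((task, evidence))

    # Final answer is reasoned over the accumulated evidence,
    # with no parameter updates to the VLM at any step.
    return vlm.reason_over(query, workspace)
```

The key property this sketch captures is that all intelligence comes from orchestrating inference-time calls: the model weights are never touched, and state lives entirely in the external workspace.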
Problem

Research questions and friction points this paper is trying to address.

Video-Language Models
reasoning
active perception
visual exploration
long-horizon video reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active Video Reasoning
Frozen VLMs
Visual Attention Guidance
System-2-like Reasoning
Task Decomposition