🤖 AI Summary
Complex video reasoning is bottlenecked by the need for large-scale annotated data and strong visual perception capabilities. Method: This paper proposes a training-free, multi-stage explicit reasoning paradigm enabling System-2–like structured inference for videos. We introduce the first training-free chain-of-thought (CoT) architecture for video understanding, integrating dynamic CoT path routing, hierarchical question decomposition, and frame-level self-consistency verification, alongside a novel taxonomy for video question classification. Contribution/Results: Our approach decouples reasoning from reliance on end-to-end visual perception, shifting instead to interpretable and verifiable symbolic reasoning paths. Evaluated on EgoSchema and VideoEspresso, it achieves absolute improvements of +9.3% and +5.6%, respectively—matching or surpassing state-of-the-art multimodal foundation models including GPT-4V, GPT-4o, and Gemini 1.5 Flash.
📝 Abstract
System-2 reasoning is developing rapidly with the emergence of deep-thinking models and chain-of-thought technology, and has become a central topic of discussion in the AI community. However, research on complex video reasoning remains comparatively sparse. In this work, we propose CoT-Vid, a novel training-free paradigm for the video domain with a multi-stage complex reasoning design. Unlike existing video LLMs, which rely heavily on perceptual abilities, it achieves a surprising performance gain through an explicit reasoning mechanism. The paradigm consists of three main components: dynamic inference path routing, problem decoupling strategy, and video self-consistency verification. In addition, we propose a new standard for categorizing video questions. CoT-Vid shows outstanding results on a wide range of benchmarks, outperforming its base model by 9.3% on EgoSchema and 5.6% on VideoEspresso, rivalling or even surpassing larger and proprietary models such as GPT-4V, GPT-4o and Gemini-1.5-Flash. Our codebase will be publicly available soon.
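The three components named above (path routing, question decoupling, self-consistency verification) can be sketched at a very high level as a pipeline. This is a minimal illustrative sketch, not the paper's implementation: the keyword-based router, the two-step decomposition, and the `ask_model` stub are all hypothetical placeholders for the actual video-LLM calls.

```python
from collections import Counter

def route_question(question: str) -> str:
    # Hypothetical router: choose a reasoning path from surface cues.
    # The paper's dynamic routing is more sophisticated; this is a stand-in.
    if any(w in question.lower() for w in ("why", "how", "before", "after")):
        return "multi_step"
    return "direct"

def decompose(question: str) -> list[str]:
    # Hypothetical problem decoupling: split one complex question
    # into an evidence-gathering step and an answering step.
    return [
        f"What visual evidence in the video is relevant to: {question!r}?",
        f"Given that evidence, answer: {question!r}",
    ]

def self_consistency(sampled_answers: list[str]) -> tuple[str, float]:
    # Majority vote over answers sampled from independent reasoning
    # chains; agreement rate serves as a rough confidence signal.
    winner, count = Counter(sampled_answers).most_common(1)[0]
    return winner, count / len(sampled_answers)

def cot_vid_pipeline(question: str, ask_model) -> str:
    # `ask_model` is a stub for the underlying video LLM call.
    path = route_question(question)
    steps = decompose(question) if path == "multi_step" else [question]
    samples = [ask_model(steps) for _ in range(5)]
    answer, _confidence = self_consistency(samples)
    return answer
```

For example, `self_consistency(["B", "B", "A", "B", "C"])` returns `("B", 0.6)`: option B wins the vote with 3 of 5 sampled chains agreeing.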