🤖 AI Summary
Current VideoLLMs exhibit limitations in fine-grained object-level video understanding and chain-of-thought (CoT) reasoning, primarily due to the absence of structured intermediate supervision. To address this, the authors propose CoTasks, a framework that brings structured CoT-style supervision to video instruction tuning. CoTasks decomposes video reasoning into interpretable, entity-level subtasks—frame localization, entity tracking, and spatial and temporal relation extraction—establishing a structured, instruction-tuning–oriented reasoning paradigm. CoTasks data is constructed from existing benchmarks (NeXT-QA, STAR) and used to instruction-tune LLaVA-Video-7B and Qwen2.5-VL-3B. On NeXT-QA, average GPT-4 evaluation scores improve by +3.3 and +17.4 points, respectively, with subtask gains of up to +48.1 points in the causal, temporal, and descriptive categories, demonstrating substantial improvements in compositional spatiotemporal reasoning.
📝 Abstract
Despite recent progress in video large language models (VideoLLMs), a key open challenge remains: how to equip models with chain-of-thought (CoT) reasoning abilities grounded in fine-grained object-level video understanding. Existing instruction-tuned models, such as the Qwen and LLaVA series, are trained on high-level video-text pairs and often lack the structured annotations necessary for compositional, step-by-step reasoning. We propose CoTasks: Chain-of-Thought based Video Instruction Tuning Tasks, a new framework that decomposes complex video questions from existing datasets (e.g., NeXT-QA, STAR) into four entity-level foundational tasks: frame localization, entity tracking, spatial relation extraction, and temporal relation extraction. By embedding these intermediate CoT-style reasoning steps into the input, CoTasks enables models to explicitly perform object-centric spatiotemporal reasoning. Experiments on the NeXT-QA benchmark show that CoTasks significantly enhances inference performance: LLaVA-Video-7B improves by +3.3 points in average GPT-4 evaluation score, and Qwen2.5-VL-3B gains +17.4, with large boosts in the causal (+14.6), temporal (+10.9), and descriptive (+48.1) subcategories. These results demonstrate the effectiveness of CoTasks as a structured CoT-style supervision framework for improving compositional video reasoning.
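To make the "embedding intermediate CoT-style reasoning steps into the input" idea concrete, here is a minimal sketch of how the four entity-level subtask outputs might be serialized into a model prompt. The function name, field names, and text format are illustrative assumptions, not the paper's actual data schema.

```python
# Hypothetical sketch: serialize the four CoTasks subtask outputs
# (frame localization, entity tracking, spatial relations, temporal
# relations) into an intermediate-reasoning prefix for the question.
# Field names and formatting are assumptions for illustration only.

def build_cotasks_prompt(question: str, annotations: dict) -> str:
    """Embed entity-level intermediate reasoning steps into the input."""
    steps = [
        # Task 1: frames relevant to the question
        f"Relevant frames: {annotations['frames']}",
        # Task 2: entities tracked across those frames
        f"Tracked entities: {annotations['entities']}",
        # Task 3: spatial relations between entities within frames
        f"Spatial relations: {annotations['spatial']}",
        # Task 4: temporal relations between events across frames
        f"Temporal relations: {annotations['temporal']}",
    ]
    reasoning = "\n".join(f"Step {i + 1}. {s}" for i, s in enumerate(steps))
    return f"{reasoning}\nQuestion: {question}"

example = build_cotasks_prompt(
    "Why did the dog jump?",
    {
        "frames": [12, 13, 14],
        "entities": ["dog", "ball"],
        "spatial": ["dog left-of ball (frame 12)"],
        "temporal": ["ball thrown before dog jumps"],
    },
)
print(example)
```

In this reading, the model answers the question conditioned on an explicit object-centric reasoning trace rather than the raw video-question pair alone.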