🤖 AI Summary
To address the challenges of multi-step reasoning, weak spatiotemporal understanding, and limited interpretability in Video Question Answering (VideoQA), this paper proposes Agent-of-Thoughts Distillation (AoTD), a novel paradigm. AoTD decomposes complex queries into agent-driven sub-tasks, explicitly models the multimodal intermediate results as traceable reasoning chains, and incorporates an LLM-based self-verification mechanism to ensure the reliability of chain-of-thought (CoT) reasoning. Technically, it integrates a lightweight agent system, a dedicated visual encoder, CoT generation and distillation, LLM self-verification, and instruction fine-tuning. AoTD achieves significant performance gains across several multiple-choice and open-ended VideoQA benchmarks, including TVQA, EgoSchema, and VideoChatGPT-QA, while improving both reasoning interpretability and spatiotemporal grounding accuracy. To the authors' knowledge, it is the first framework to enable verifiable, structured, and spatiotemporally grounded multi-step reasoning in video foundation models.
📝 Abstract
This paper tackles the problem of video question answering (VideoQA), a task that often requires multi-step reasoning and a deep understanding of spatial-temporal dynamics. While large video-language models perform well on benchmarks, they often lack explainability and spatial-temporal grounding. In this paper, we propose Agent-of-Thoughts Distillation (AoTD), a method that enhances models by incorporating automatically generated Chains-of-Thought (CoTs) into the instruction-tuning process. Specifically, we leverage an agent-based system to decompose complex questions into sub-tasks and address them with specialized vision models; the intermediate results are then treated as reasoning chains. We also introduce a verification mechanism using a large language model (LLM) to ensure the reliability of the generated CoTs. Extensive experiments demonstrate that AoTD improves performance on both multiple-choice and open-ended benchmarks.