STEP: Enhancing Video-LLMs' Compositional Reasoning by Spatio-Temporal Graph-guided Self-Training

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video-LLMs exhibit limited capability in multi-step spatiotemporal compositional reasoning—e.g., cross-object relational, interactive, and event-level inference—primarily due to the absence of explicit reasoning supervision, high-quality compositional training data, and scalable annotation protocols. To address this, we propose a graph-guided self-training paradigm: (1) constructing Spatiotemporal Scene Graphs (STSGs) to explicitly model entities, relations, and their dynamic evolution in videos; (2) automatically generating multi-step question-answering samples with Chain-of-Thought (CoT) reasoning grounded in STSGs; and (3) fine-tuning Video-LLMs via dual-objective supervision over both final answers and intermediate reasoning steps. This approach eliminates reliance on manual annotation, overcoming key bottlenecks in reasoning supervision and data curation. Evaluated on compositional reasoning tasks requiring ≥3 reasoning steps, our method achieves a 21.3% performance gain and attains state-of-the-art results on multiple benchmarks using only a small set of self-generated samples, demonstrating strong generalization.
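The three-stage pipeline above can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' code: it encodes a tiny STSG as per-frame (subject, relation, object) triples and derives a question with a multi-step CoT rationale by chaining an entity's relations across time. All names and the sample graph are invented for this sketch.

```python
# Toy sketch of STSG-guided QA derivation (hypothetical, not the paper's implementation).
# An STSG is modeled here as a list of frames, each holding (subject, relation, object) triples.
stsg = [
    {"t": 0, "triples": [("person", "holds", "cup"), ("cup", "on", "table")]},
    {"t": 1, "triples": [("person", "walks_to", "door"), ("person", "holds", "cup")]},
]

def derive_multistep_qa(graph, entity):
    """Chain relations involving `entity` across frames into a CoT rationale."""
    steps = []
    for frame in graph:
        for subj, rel, obj in frame["triples"]:
            if subj == entity:
                steps.append(f"At t={frame['t']}, {subj} {rel} {obj}.")
    question = f"What does the {entity} do across the video?"
    rationale = " ".join(steps)  # intermediate reasoning steps, grounded in the graph
    answer = steps[-1] if steps else "Unknown."  # final answer grounded in the last step
    return question, rationale, answer

q, cot, a = derive_multistep_qa(stsg, "person")
```

In the actual method, such (question, rationale, answer) triples become fine-tuning data, so no manual annotation is needed; here the answer is read off the last graph step purely for illustration.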

📝 Abstract
Video Large Language Models (Video-LLMs) have recently shown strong performance in basic video understanding tasks, such as captioning and coarse-grained question answering, but struggle with compositional reasoning that requires multi-step spatio-temporal inference across object relations, interactions, and events. The hurdles to enhancing this capability include extensive manual labor, the lack of spatio-temporal compositionality in existing data, and the absence of explicit reasoning supervision. In this paper, we propose STEP, a novel graph-guided self-training method that enables Video-LLMs to generate reasoning-rich fine-tuning data from any raw videos and thereby improve themselves. Specifically, we first induce Spatio-Temporal Scene Graph (STSG) representations of diverse videos to capture fine-grained, multi-granular video semantics. The STSGs then guide the derivation of multi-step reasoning Question-Answer (QA) data with Chain-of-Thought (CoT) rationales. Both answers and rationales are integrated into the training objective, enhancing the model's reasoning abilities through supervision over explicit reasoning steps. Experimental results demonstrate the effectiveness of STEP across models of varying scales, with a significant 21.3% improvement on tasks requiring three or more reasoning steps. Furthermore, STEP achieves superior performance with a minimal amount of self-generated, rationale-enriched training samples on both compositional reasoning and comprehensive understanding benchmarks, highlighting its broad applicability and vast potential.
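The dual-objective supervision described in the abstract, training on both answers and rationales, can be sketched as a weighted sum of two sequence losses. The snippet below is a minimal toy, assuming per-token probabilities are already available; the function names and the weighting symbol `lambda_` are illustrative, not from the paper.

```python
# Toy sketch of dual-objective supervision (hypothetical; not the paper's loss code).
# Total loss = answer NLL + lambda_ * rationale NLL, so the model is supervised
# on both the final answer and the intermediate reasoning steps.
import math

def nll(token_probs):
    """Negative log-likelihood of a sequence given its per-token probabilities."""
    return -sum(math.log(p) for p in token_probs)

def dual_objective(answer_probs, rationale_probs, lambda_=1.0):
    """Combine answer supervision and CoT-rationale supervision into one loss."""
    return nll(answer_probs) + lambda_ * nll(rationale_probs)

loss = dual_objective([0.9, 0.8], [0.7, 0.7, 0.6], lambda_=0.5)
```

In practice both terms would be token-level cross-entropy over the model's output distribution; the scalar weight trades off answer accuracy against rationale fidelity.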
Problem

Research questions and friction points this paper is trying to address.

Enhancing Video-LLMs' multi-step spatio-temporal compositional reasoning
Addressing lack of spatio-temporal compositionality in existing training data
Providing explicit reasoning supervision for improved model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-Temporal Scene Graph representation
Graph-guided self-training method
Chain-of-Thought QA data generation