🤖 AI Summary
This work introduces Multi-hop Video Question Generation (MVQG), a novel task that requires complex reasoning across temporally disjoint video segments. To this end, we propose VideoChain, the first video question generation framework explicitly designed for multi-hop reasoning. VideoChain adopts a modular architecture that integrates an enhanced BART model with multi-fragment video embeddings, and leverages TVQA+ to automatically construct high-quality multi-hop question-answer pairs. We also release MVQ-60, a large-scale, diverse dataset curated for increased task difficulty and semantic complexity. Extensive experiments demonstrate that VideoChain achieves state-of-the-art performance: ROUGE-L (0.6454), BERTScore-F1 (0.7967), and semantic similarity (0.8110), confirming its ability to generate coherent, logically deep, and video-grounded questions.
📝 Abstract
Multi-hop Question Generation (QG) effectively evaluates reasoning but remains confined to text, while existing Video Question Generation (VideoQG) is limited to zero-hop questions over single segments. To address this gap, we introduce VideoChain, a novel Multi-hop Video Question Generation (MVQG) framework designed to generate questions that require reasoning across multiple, temporally separated video segments. VideoChain features a modular architecture built on a modified BART backbone enhanced with video embeddings, capturing both textual and visual dependencies. Using the TVQA+ dataset, we automatically construct the large-scale MVQ-60 dataset by merging zero-hop QA pairs, ensuring scalability and diversity. Evaluations show VideoChain's strong performance across standard generation metrics: ROUGE-L (0.6454), ROUGE-1 (0.6854), BLEU-1 (0.6711), BERTScore-F1 (0.7967), and semantic similarity (0.8110). These results highlight the model's ability to generate coherent, contextually grounded, and reasoning-intensive questions.
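The abstract describes constructing multi-hop questions by merging zero-hop QA pairs drawn from different video segments. A minimal sketch of one way such a merge could work is shown below; the `QAPair` class, the bridging-entity heuristic, and all field names are illustrative assumptions, not the paper's actual pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QAPair:
    question: str    # zero-hop question about a single segment
    answer: str
    segment_id: str  # which video fragment the question refers to

def merge_pairs(bridge: QAPair, target: QAPair) -> Optional[QAPair]:
    """Hypothetical two-hop composition: chain a 'bridge' QA pair into a
    'target' question when the bridge answer appears verbatim in the
    target question and the two pairs cover different segments."""
    if bridge.segment_id == target.segment_id:
        return None  # multi-hop requires temporally separate segments
    if bridge.answer.lower() not in target.question.lower():
        return None  # no shared entity to chain the two hops on
    # Replace the bridging entity with a clause derived from the first-hop
    # question, so answering now requires reasoning over both segments.
    clause = "the answer to '" + bridge.question + "'"
    idx = target.question.lower().index(bridge.answer.lower())
    merged_question = (target.question[:idx] + clause
                       + target.question[idx + len(bridge.answer):])
    return QAPair(merged_question, target.answer,
                  bridge.segment_id + "+" + target.segment_id)
```

For example, merging "What did Howard hand to Raj?" (answer: "a telescope") with "Where did Raj put a telescope?" yields a single question whose answer demands evidence from both segments.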