VideoChain: A Transformer-Based Framework for Multi-hop Video Question Generation

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces the Multi-hop Video Question Generation (MVQG) task, which requires complex reasoning across temporally disjoint video segments. To this end, we propose VideoChain, the first video question generation framework explicitly designed for multi-hop reasoning. VideoChain adopts a modular architecture that integrates an enhanced BART model with multi-fragment video embeddings, and leverages TVQA+ to automatically construct high-quality multi-hop question-answer pairs. We release MVQ-60, a large-scale, diverse dataset specifically curated to increase task difficulty and semantic complexity. Extensive experiments demonstrate that VideoChain achieves state-of-the-art performance: ROUGE-L (0.6454), BERTScore-F1 (0.7967), and semantic similarity (0.8110), confirming its capability to generate coherent, logically deep, and video-grounded questions.

📝 Abstract
Multi-hop Question Generation (QG) effectively evaluates reasoning but remains confined to text; Video Question Generation (VideoQG) is limited to zero-hop questions over single segments. To address this, we introduce VideoChain, a novel Multi-hop Video Question Generation (MVQG) framework designed to generate questions that require reasoning across multiple, temporally separated video segments. VideoChain features a modular architecture built on a modified BART backbone enhanced with video embeddings, capturing textual and visual dependencies. Using the TVQA+ dataset, we automatically construct the large-scale MVQ-60 dataset by merging zero-hop QA pairs, ensuring scalability and diversity. Evaluations show VideoChain's strong performance across standard generation metrics: ROUGE-L (0.6454), ROUGE-1 (0.6854), BLEU-1 (0.6711), BERTScore-F1 (0.7967), and semantic similarity (0.8110). These results highlight the model's ability to generate coherent, contextually grounded, and reasoning-intensive questions.
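The abstract describes constructing MVQ-60 by merging zero-hop QA pairs from TVQA+. A minimal sketch of how such a merge could work, assuming a bridge-entity heuristic: when one pair's answer names an entity that appears in another pair's question, the entity mention is replaced by a relative clause derived from the first question, so answering requires both segments. The field names, the "Who ..."-question template, and the merge rule are illustrative assumptions, not the paper's actual pipeline.

```python
def merge_zero_hop_pairs(pair_a, pair_b):
    """Merge two zero-hop QA pairs into one multi-hop pair.

    Assumes pair_a's answer (the bridge entity) appears verbatim in
    pair_b's question, and that pair_a's question is a "Who ...?" question
    that can be rewritten as a relative clause.
    """
    bridge = pair_a["answer"]
    assert bridge in pair_b["question"], "pairs must share a bridge entity"
    assert pair_a["question"].startswith("Who "), "sketch handles Who-questions only"

    # Turn "Who handed Sheldon the letter?" into
    # "the person who handed Sheldon the letter"
    clause = "the person who " + pair_a["question"][len("Who "):].rstrip("?")

    # Replacing the explicit entity with the clause forces the model to
    # first resolve it from pair_a's segment, then answer from pair_b's.
    return {
        "question": pair_b["question"].replace(bridge, clause),
        "answer": pair_b["answer"],
        "segments": [pair_a["segment"], pair_b["segment"]],
    }

pair_a = {"question": "Who handed Sheldon the letter in the kitchen?",
          "answer": "Leonard", "segment": "s1"}
pair_b = {"question": "What did Leonard order at the restaurant?",
          "answer": "a cheeseburger", "segment": "s2"}
merged = merge_zero_hop_pairs(pair_a, pair_b)
print(merged["question"])
# What did the person who handed Sheldon the letter in the kitchen order at the restaurant?
```

Automating this kind of merge over all same-episode pair combinations is what makes the construction scalable without manual annotation.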
Problem

Research questions and friction points this paper is trying to address.

Generating multi-hop reasoning questions across video segments
Overcoming limitations of single-segment video question generation
Creating scalable diverse datasets for video reasoning evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based framework for multi-hop video question generation
Modular architecture with enhanced BART backbone and video embeddings
Automatically constructs large-scale dataset from merged QA pairs
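The architecture bullets above describe a BART backbone enhanced with multi-fragment video embeddings. One common way to realize this, sketched below with NumPy as an illustrative assumption rather than the paper's implementation, is to project each fragment's visual feature vector into the text model's embedding space and prepend the projected vectors to the token embeddings, so the encoder's self-attention spans both modalities. All dimensions and the projection scheme are assumed for illustration.

```python
import numpy as np

def fuse_video_and_text(video_feats, token_embeds, proj):
    """Prepend projected video-fragment embeddings to token embeddings.

    video_feats:  (n_fragments, d_vid)  per-fragment visual features
    token_embeds: (n_tokens, d_model)   subtitle/context token embeddings
    proj:         (d_vid, d_model)      learned linear projection (assumed)
    """
    video_embeds = video_feats @ proj  # map visual features into text space
    # Concatenate along the sequence axis: the encoder then attends
    # jointly over visual positions and textual positions.
    return np.concatenate([video_embeds, token_embeds], axis=0)

rng = np.random.default_rng(0)
fused = fuse_video_and_text(
    rng.normal(size=(3, 512)),   # 3 video fragments, 512-d visual features
    rng.normal(size=(10, 768)),  # 10 tokens, 768-d (BART-base width)
    rng.normal(size=(512, 768)),
)
print(fused.shape)  # (13, 768): 3 video positions + 10 token positions
```

In a real model the fused sequence would be passed to the encoder (e.g. via an `inputs_embeds`-style argument), with the projection trained jointly with the generator.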