🤖 AI Summary
Multimodal multi-hop question answering (MMQA) is hindered by the scarcity of high-quality, multi-step, cross-source training data, which limits models' complex reasoning capabilities. To address this, we propose the first high-fidelity synthetic dataset construction framework tailored for MMQA: leveraging Wikipedia, it employs a five-stage pipeline that includes multimodal document extraction, few-shot higher-order question generation, cross-modal alignment verification, and knowledge-distillation-guided synthesis refinement. Our approach pioneers a generation paradigm that combines rule-based heuristics, large language models (LLMs), and knowledge distillation, with multi-stage quality control embedded throughout to overcome the annotation bottleneck. Experiments demonstrate that, under identical sample-size constraints, our method achieves an average 1.9-point improvement in Exact Match (EM) across two established benchmarks, significantly outperforming models trained on human-annotated data.
📝 Abstract
Multimodal multihop question answering is a complex task that requires reasoning over multiple sources of information, such as images and text, to answer questions. While there has been significant progress in visual question answering, the multihop setting remains unexplored due to the lack of high-quality datasets. Current methods focus on single-hop question answering or a single modality, which makes them unsuitable for real-world scenarios such as analyzing multimodal educational materials, summarizing lengthy academic articles, or interpreting scientific studies that combine charts, images, and text. To address this gap, we propose a novel methodology, introducing the first framework for creating a high-quality dataset that enables training models for multimodal multihop question answering. Our approach consists of a 5-stage pipeline that involves acquiring relevant multimodal documents from Wikipedia, synthetically generating high-level questions and answers, and validating them through rigorous criteria to ensure quality data. We evaluate our methodology by training models on our synthesized dataset and testing them on two benchmarks. Our results demonstrate that, with an equal sample size, models trained on our synthesized data outperform those trained on human-collected data by 1.9 points in exact match (EM) on average. We believe our data synthesis method will serve as a strong foundation for training and evaluating multimodal multihop question answering models.
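The pipeline described above (acquire multimodal documents, synthesize multi-hop questions and answers, then filter with validation criteria) can be sketched at a high level as follows. This is a minimal illustrative sketch, not the authors' released code: the function names, the toy documents, and the specific validation checks are all assumptions introduced here for clarity.

```python
"""Hypothetical sketch of the paper's data-synthesis pipeline.

All bodies are placeholders: real stages would call a Wikipedia
retriever and an LLM generator rather than returning stubbed data.
"""

def retrieve_documents(topic):
    # Stage 1 (sketch): acquire related multimodal documents
    # (text passages plus image captions) for a topic. Stubbed
    # with toy data in place of a real Wikipedia retriever.
    return [
        {"modality": "text", "content": f"{topic} was founded in 1891."},
        {"modality": "image", "content": f"Photo of the {topic} campus."},
    ]

def generate_qa(docs):
    # Middle stages (sketch): synthesize a question whose answer
    # requires combining more than one document/modality. A real
    # system would prompt an LLM with few-shot examples here.
    return {
        "question": "When was the campus shown in the photo founded?",
        "answer": "1891",
        "supporting": [d["content"] for d in docs],
    }

def validate(example):
    # Final stages (sketch): keep only examples that pass quality
    # criteria, e.g. the answer is grounded in a supporting document
    # and the question draws on at least two sources.
    grounded = any(example["answer"] in s for s in example["supporting"])
    return grounded and len(example["supporting"]) >= 2

def synthesize(topic):
    # Run the full pipeline; drop examples that fail validation.
    docs = retrieve_documents(topic)
    qa = generate_qa(docs)
    return qa if validate(qa) else None

example = synthesize("Stanford")
```

Each stage is independent, so the validation step can be tightened (or new criteria added) without touching retrieval or generation, which mirrors the multi-stage quality control the abstract describes.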