🤖 AI Summary
Constructing high-quality multi-hop question answering (MHQA) datasets incurs prohibitive manual annotation costs, while existing synthetic approaches rely heavily on human intervention and support only limited reasoning types. Method: This paper proposes the first fully automated, human-guidance-free MHQA question synthesis framework. It identifies cross-document reasoning paths via document complementarity detection, supporting both bridge and comparison reasoning. The framework integrates structured template generation with automatic consistency and answerability verification, and establishes a systematic quality evaluation protocol. Contribution/Results: Experiments demonstrate that synthesized questions match or surpass human-annotated data in realism, reasoning depth, and answer accuracy, while drastically reducing construction cost. The code is publicly released, enabling efficient MHQA dataset construction for low-resource, domain-specific applications.
📝 Abstract
Multi-Hop Question Answering (MHQA) is crucial for evaluating a model's capability to integrate information from diverse sources. However, creating extensive and high-quality MHQA datasets is challenging: (i) manual annotation is expensive, and (ii) current synthesis methods often produce simplistic questions or require extensive manual guidance. This paper introduces HopWeaver, the first automatic framework that synthesizes authentic multi-hop questions from unstructured text corpora without human intervention. HopWeaver synthesizes two types of multi-hop questions (bridge and comparison) using an innovative approach that identifies complementary documents across corpora. Its coherent pipeline constructs reasoning paths that integrate information across multiple documents, ensuring that synthesized questions genuinely require multi-hop reasoning. We further present a comprehensive system for evaluating synthesized multi-hop questions. Empirical evaluations demonstrate that the synthesized questions achieve quality comparable or superior to human-annotated datasets at a lower cost. Our approach is valuable for developing MHQA datasets in specialized domains with scarce annotated resources. The code for HopWeaver is publicly available.