What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices

πŸ“… 2024-09-03
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 4
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing long-context multi-hop instruction data synthesis (typically built on the Self-Instruct framework) suffers from low quality: fewer than 35% of generated samples are genuinely multi-hop, and more than 40% are of poor quality. Method: The paper proposes the Multi-agent Interactive Multi-hop Generation (MIMG) framework, a collaborative pipeline with four components: a Quality Verification Agent, a Single-hop Question Generation Agent, a Multiple Question Sampling Strategy, and a Multi-hop Question Merger Agent. Contribution/Results: The work systematically investigates the key factors affecting synthetic long-context data quality, including document selection, question merging, and validation techniques, across various models. MIMG raises the proportion of high-quality, multi-hop, and diverse instances above 85%, and models fine-tuned on MIMG-synthesized data significantly outperform baselines trained on larger amounts of human-annotated data on long-context multi-hop reasoning tasks.
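The four-component pipeline described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: every agent here is a stub function (in MIMG each role is played by an LLM with its own prompt), and all function names, document fields, and heuristics below are hypothetical.

```python
import random

def generate_single_hop(documents):
    """Single-hop Question Generation Agent (stub): one question per document."""
    return [{"question": f"What does document {i} say about {doc['topic']}?",
             "answer": doc["fact"], "doc_id": i}
            for i, doc in enumerate(documents)]

def sample_questions(single_hop, k=2, seed=0):
    """Multiple Question Sampling Strategy (stub): pick k single-hop
    questions to be chained together."""
    rng = random.Random(seed)
    return rng.sample(single_hop, k)

def merge_multi_hop(sampled):
    """Multi-hop Question Merger Agent (stub): chain the sampled
    single-hop questions into one multi-hop question."""
    hops = ", and then ".join(q["question"].rstrip("?") for q in sampled)
    return {"question": hops + "?",
            "answers": [q["answer"] for q in sampled],
            "hops": len(sampled)}

def verify_quality(item, min_hops=2):
    """Quality Verification Agent (stub): keep only items that are
    genuinely multi-hop."""
    return item["hops"] >= min_hops

def mimg_pipeline(documents, k=2):
    """End-to-end sketch: generate, sample, merge, then verify."""
    single_hop = generate_single_hop(documents)
    merged = merge_multi_hop(sample_questions(single_hop, k=k))
    return merged if verify_quality(merged) else None

docs = [{"topic": "context length", "fact": "128k tokens"},
        {"topic": "training data", "fact": "synthetic QA"},
        {"topic": "evaluation", "fact": "multi-hop benchmarks"}]
item = mimg_pipeline(docs)
print(item["hops"])  # 2
```

The design point the sketch preserves is the ordering: verification runs last as a filter over merged items, which is how the framework can push the share of retained instances that are truly multi-hop above 85%.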

πŸ“ Abstract
Recent advancements in large language models (LLMs) with extended context windows have significantly improved tasks such as information extraction, question answering, and complex planning scenarios. In order to achieve success in long context tasks, a large amount of work has been done to enhance the long context capabilities of the model through synthetic data. Existing methods typically utilize the Self-Instruct framework to generate instruction tuning data for better long context capability improvement. However, our preliminary experiments indicate that less than 35% of generated samples are multi-hop, and more than 40% exhibit poor quality, limiting comprehensive understanding and further research. To improve the quality of synthetic data, we propose the Multi-agent Interactive Multi-hop Generation (MIMG) framework, incorporating a Quality Verification Agent, a Single-hop Question Generation Agent, a Multiple Question Sampling Strategy, and a Multi-hop Question Merger Agent. This framework improves the data quality, with the proportion of high-quality, multi-hop, and diverse data exceeding 85%. Furthermore, we systematically investigate strategies for document selection, question merging, and validation techniques through extensive experiments across various models. Our findings show that our synthetic high-quality long-context instruction data significantly enhances model performance, even surpassing models trained on larger amounts of human-annotated data. Our code is available at: https://github.com/WowCZ/LongMIT.
Problem

Research questions and friction points this paper is trying to address.

Identifying key factors for effective long-context multi-hop instruction datasets
Improving synthetic data quality for better long-context model performance
Developing a multi-agent framework to generate high-quality multi-hop questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent Interactive Multi-hop Generation framework
Quality Verification Agent ensures data quality
Multiple Question Sampling Strategy enhances diversity
πŸ”Ž Similar Papers
No similar papers found.