Not All Documents Are What You Need for Extracting Instruction Tuning Data

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address critical challenges—including scarcity of instruction-tuning data, low diversity in synthetic data, high human annotation costs, and poor relevance of question-answer (QA) pairs extracted from web corpora—this paper proposes EQUAL, an iterative document clustering and multi-armed bandit–driven framework for high-quality QA pair extraction. EQUAL integrates contrastive learning–based embedding clustering, LLM-assisted structured QA extraction, joint iterative optimization of documents and QA pairs, and an adaptive sampling mechanism, thereby ensuring task relevance while substantially improving data quality and diversity. Evaluated on AutoMathText and StackOverflow, EQUAL reduces computational cost by 5–10× and boosts average accuracy by 2.5% on LLaMA-3.1-8B and Mistral-7B—achieving, for the first time, a unified balance among high relevance, high diversity, and high scalability.

📝 Abstract
Instruction tuning improves the performance of large language models (LLMs), but it heavily relies on high-quality training data. Recently, LLMs have been used to synthesize instruction data from seed question-answer (QA) pairs. However, these synthesized instructions often lack diversity and tend to be similar to the input seeds, limiting their applicability in real-world scenarios. To address this, we propose extracting instruction-tuning data from web corpora that contain rich and diverse knowledge. A naive solution is to retrieve domain-specific documents and extract all QA pairs from them, but this faces two key challenges: (1) extracting all QA pairs using LLMs is prohibitively expensive, and (2) many extracted QA pairs may be irrelevant to the downstream tasks, potentially degrading model performance. To tackle these issues, we introduce EQUAL, an effective and scalable data extraction framework that iteratively alternates between document selection and high-quality QA pair extraction to enhance instruction tuning. EQUAL first clusters the document corpus based on embeddings derived from contrastive learning, then uses a multi-armed bandit strategy to efficiently identify clusters that are likely to contain valuable QA pairs. This iterative approach significantly reduces computational cost while boosting model performance. Experiments on AutoMathText and StackOverflow across four downstream tasks show that EQUAL reduces computational costs by 5–10× and improves accuracy by 2.5% on LLaMA-3.1-8B and Mistral-7B.
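
The abstract's first stage, grouping the corpus by embedding similarity before any LLM calls, can be illustrated with a toy k-means over precomputed document embeddings. This is a sketch, not the paper's implementation: EQUAL derives its embeddings from contrastive learning, which is stubbed here as given vectors, and the deterministic farthest-point initialization is an illustrative choice.

```python
def _dist2(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def init_centroids(vectors, k):
    # deterministic farthest-point initialization (k-means++-style, no randomness)
    centroids = [vectors[0]]
    while len(centroids) < k:
        nxt = max(vectors, key=lambda v: min(_dist2(v, c) for c in centroids))
        centroids.append(nxt)
    return centroids

def kmeans(vectors, k, iters=20):
    """Toy k-means: assign each document embedding to its nearest centroid."""
    centroids = init_centroids(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assignment step
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: _dist2(v, centroids[c]))
        # update step: centroid becomes the mean of its assigned vectors
        for c in range(k):
            members = [v for v, a in zip(vectors, assign) if a == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign

# two well-separated toy "document embedding" groups
docs = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]]
labels = kmeans(docs, k=2)   # -> [0, 0, 1, 1]
```

Each resulting cluster then becomes one "arm" for the bandit-based selection described below; only documents in promising clusters are ever sent to the LLM for QA extraction.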
Problem

Research questions and friction points this paper is trying to address.

Synthesized instruction-tuning data lacks diversity and tends to mirror its seed QA pairs
Extracting all QA pairs from documents is computationally expensive
Irrelevant QA pairs degrade model performance in downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contrastive learning for document clustering
Applies multi-armed bandit for cluster selection
Iteratively extracts high-quality QA pairs efficiently
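
The cluster-selection step can be sketched as a standard UCB1 bandit, where each document cluster is an arm and the reward is the fraction of useful QA pairs extracted from a sampled batch. The names and the deterministic reward stub below are hypothetical; EQUAL's actual reward design and exploration constant are specified in the paper.

```python
import math

def ucb_select(pulls, rewards, t, c=2.0):
    """Pick the cluster with the highest UCB1 score; unexplored clusters first."""
    for arm, n in enumerate(pulls):
        if n == 0:
            return arm
    return max(
        range(len(pulls)),
        key=lambda a: rewards[a] / pulls[a] + math.sqrt(c * math.log(t) / pulls[a]),
    )

def run_bandit(cluster_quality, rounds=200):
    """Simulate selection; cluster_quality stubs the per-batch reward signal."""
    k = len(cluster_quality)
    pulls, rewards = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        arm = ucb_select(pulls, rewards, t)
        reward = cluster_quality[arm]   # deterministic stub; real rewards are noisy
        pulls[arm] += 1
        rewards[arm] += reward
    return pulls

# cluster 2 yields the most task-relevant QA pairs, so it gets sampled the most,
# while the logarithmic bonus still forces occasional exploration of the others
pulls = run_bandit([0.1, 0.3, 0.9])
```

This is how the framework avoids extracting QA pairs from every document: extraction budget concentrates on clusters whose past batches yielded relevant pairs, while the exploration bonus keeps checking the rest.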
Authors
Chi Zhang (Department of Computer Science, Beijing Institute of Technology, Beijing, China)
Huaping Zhong (SenseTime Group Limited)
Hongtao Li (SenseTime Research, Shenzhen, China)
Chengliang Chai (Beijing Institute of Technology)
Jiawei Hong (SenseTime Research, Shenzhen, China)
Yuhao Deng (Department of Computer Science, Beijing Institute of Technology, Beijing, China)
Jiacheng Wang (Nanyang Technological University)
Tian Tan (University of Arizona, USA)
Yizhou Yan (Meta, USA)
Jiantao Qiu (EE Department, Tsinghua University)
Ye Yuan (Department of Computer Science, Beijing Institute of Technology, Beijing, China)
Guoren Wang (Beijing Institute of Technology)
Conghui He (Shanghai AI Laboratory)
Lei Cao (University of Arizona; MIT CSAIL)