MDIT: A Model-free Data Interpolation Method for Diverse Instruction Tuning

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of instruction tuning, namely insufficient training data diversity, high manual construction costs, and reliance on external models, this paper proposes a fully automatic, model-free instruction data synthesis method. The approach comprises three core components: (1) a task-level instruction interpolation mechanism that generates new instructions via linear combination of task vectors; (2) diversity-aware k-means clustering to ensure broad coverage of the multi-task semantic space; and (3) template composition coupled with semantic consistency filtering to safeguard output quality. Crucially, the method requires no external models, human annotations, or additional computational resources. Evaluated across diverse benchmarks, including general question answering, mathematical reasoning, and code generation, it consistently improves LLM performance, demonstrating strong generalization and plug-and-play applicability.
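The interpolation idea described above can be pictured as a mixup-style linear combination of two task embedding vectors. The helper below is a hypothetical sketch for illustration only; the paper's exact formulation, embedding space, and mixing coefficient schedule are not specified here.

```python
import numpy as np

def interpolate_task_vectors(vec_a, vec_b, lam=0.5):
    """Mixup-style linear combination of two task embedding vectors.

    Hypothetical helper: returns lam * vec_a + (1 - lam) * vec_b,
    which stands in for the paper's task-level interpolation step.
    """
    return lam * np.asarray(vec_a, dtype=float) + (1.0 - lam) * np.asarray(vec_b, dtype=float)

# Toy embeddings standing in for two source tasks (e.g. QA and math reasoning).
qa_vec = np.array([1.0, 0.0, 0.0])
math_vec = np.array([0.0, 1.0, 0.0])

# A new "between-task" vector, biased 30% toward QA and 70% toward math.
mixed = interpolate_task_vectors(qa_vec, math_vec, lam=0.3)
```

In practice such a mixed vector would be decoded or matched back to instruction text (e.g. via template composition), with a semantic consistency filter discarding incoherent combinations.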

📝 Abstract
As Large Language Models (LLMs) are increasingly applied across various tasks, instruction tuning has emerged as a critical method for enhancing model performance. However, current data management strategies face substantial challenges in generating diverse and comprehensive data, restricting further improvements in model performance. To address this gap, we propose MDIT, a novel model-free data interpolation method for diverse instruction tuning, which generates varied and high-quality instruction data by performing task interpolation. Moreover, it incorporates a diversity-based clustering strategy to ensure the diversity of the training data. Extensive experiments show that our method achieves superior performance on multiple benchmark tasks. LLMs fine-tuned with MDIT show significant improvements on numerous tasks such as general question answering, math reasoning, and code generation. MDIT offers an efficient and automatic data synthesis method, generating diverse instruction data without depending on external resources while expanding the application potential of LLMs in complex environments.
Problem

Research questions and friction points this paper is trying to address.

Generating diverse and comprehensive instruction tuning data
Improving model performance without external resources
Enhancing LLMs in complex tasks automatically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-free data interpolation for diverse instruction tuning
Task interpolation generates varied high-quality instruction data
Diversity-based clustering ensures training data diversity
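The diversity-based clustering idea can be sketched as follows: embed candidate instructions, cluster them with k-means, then sample evenly across clusters so the selected data covers the semantic space broadly. This is a minimal stand-in sketch, assuming plain k-means and uniform per-cluster sampling; the function names and sampling policy are illustrative, not the paper's implementation.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over instruction embeddings (illustrative stand-in
    for the paper's diversity-based clustering)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2),
            axis=1)
        # Recompute centers; skip clusters that went empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def diverse_sample(points, k, per_cluster, seed=0):
    """Pick up to `per_cluster` items from each cluster, so the training
    subset spans all regions of the embedding space."""
    rng = np.random.default_rng(seed)
    labels = kmeans(points, k, seed=seed)
    picked = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        take = min(per_cluster, len(idx))
        picked.extend(rng.choice(idx, size=take, replace=False).tolist())
    return sorted(picked)
```

Sampling per cluster rather than globally is what enforces diversity: a global random sample would over-represent dense regions of the embedding space, while this scheme guarantees every cluster contributes examples.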
Authors

Yangning Li (Tsinghua University)
Zihua Lan (Tsinghua University)
Qingsong Lv (Tsinghua University)
Yinghui Li (Tsinghua University)
Hai-Tao Zheng (Tsinghua University)

Computer Science · Machine Learning