Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning

📅 2024-05-22
📈 Citations: 4
Influential: 1
🤖 AI Summary
Existing instruction tuning approaches heavily rely on costly human annotations or teacher-model-generated data, suffering from poor scalability and limited diversity. Method: We propose a fully automated, human- and teacher-model-free instruction data augmentation method that randomly composes multiple base instructions and employs meta-instructions to guide the model in generating structured responses, enabling end-to-end supervised fine-tuning. Contribution/Results: This work is the first to integrate instruction composition with meta-instruction–conditioned generation, significantly enhancing large language models’ comprehension of complex, multi-step instructions and their adherence to output formats. Extensive evaluations across multiple benchmarks demonstrate consistent performance gains, an 80% reduction in training cost, and particularly strong improvements on structured-output generation and multi-step reasoning tasks.

📝 Abstract
Finetuning large language models with a variety of instruction-response pairs has enhanced their capability to understand and follow instructions. Current instruction tuning primarily relies on teacher models or human intervention to generate and refine the instructions and responses for training, which are costly, non-sustainable, and may lack diversity. In this paper, we introduce Mosaic Instruction Tuning (Mosaic-IT), a human/model-free compositional data augmentation method that can efficiently create rich and diverse augmentations from existing instruction tuning data to enhance the LLMs. Mosaic-IT randomly concatenates multiple instruction data into one and trains the model to produce the corresponding responses with predefined higher-level meta-instructions to strengthen its multi-step instruction-following and format-following skills. Our extensive evaluations demonstrate the superior performance and training efficiency of Mosaic-IT, which achieves consistent performance improvements over various benchmarks and an 80% reduction in training costs compared with original instruction tuning. Our codes and data are available at https://github.com/tianyi-lab/Mosaic-IT.
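The augmentation described in the abstract can be sketched in a few lines: sample several instruction-response pairs, concatenate the instructions under a meta-instruction, and build the matching multi-part response. This is a minimal illustrative sketch, not the paper's implementation; the function name, sampling scheme, and meta-instruction wording are assumptions.

```python
import random

def mosaic_augment(pairs, k_max=4, seed=None):
    """Compose several instruction-response pairs into one training example.

    `pairs` is a list of (instruction, response) tuples. A random number
    of them (2..k_max) is concatenated; a meta-instruction tells the model
    to answer each sub-instruction in order, numbering its answers.
    Illustrative only -- the actual Mosaic-IT meta-instructions and
    composition strategies differ.
    """
    rng = random.Random(seed)
    k = rng.randint(2, min(k_max, len(pairs)))
    chosen = rng.sample(pairs, k)

    # Meta-instruction specifying how the composed answer must be formatted.
    meta = (f"Below are {k} independent instructions. "
            "Answer each one in order, prefixing each answer with its number.")
    instruction = meta + "\n\n" + "\n\n".join(
        f"{i}. {ins}" for i, (ins, _) in enumerate(chosen, 1))
    # The target response mirrors the numbering demanded by the meta-instruction.
    response = "\n\n".join(
        f"{i}. {res}" for i, (_, res) in enumerate(chosen, 1))
    return instruction, response
```

Training on such composed examples is what pushes the model toward multi-step instruction-following and strict format adherence, since the loss is taken over the full structured response.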
Problem

Research questions and friction points this paper is trying to address.

Reduces cost of instruction tuning for LLMs
Enhances diversity in instruction-response pairs
Improves multi-step instruction-following skills
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-free compositional data synthesis method
Randomly concatenates multiple instruction-response pairs into one training example
Reduces training costs by 80 percent