DS$^2$-Instruct: Domain-Specific Data Synthesis for Large Language Models Instruction Tuning

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of adapting large language models to specialized domains, where progress is often constrained by the scarcity of affordable, high-quality domain-specific instruction-tuning data. The authors propose a zero-shot instruction synthesis framework that, for the first time, integrates Bloom's cognitive taxonomy with task-aware keywords to automatically generate diverse instructions across multiple domains. To ensure that the synthesized data is professionally accurate and reliable, the framework incorporates a self-consistency verification mechanism. Notably, the approach requires no human annotation and produces high-quality instruction data across seven specialized domains. Models fine-tuned on this synthetic data significantly outperform those trained with existing data synthesis methods.

📝 Abstract
Adapting Large Language Models (LLMs) to specialized domains requires high-quality instruction tuning datasets, which are expensive to create through human annotation. Existing data synthesis methods focus on general-purpose tasks and fail to capture domain-specific terminology and reasoning patterns. To address this, we introduce DS$^2$-Instruct, a zero-shot framework that generates domain-specific instruction datasets without human supervision. Our approach first generates task-informed keywords to ensure comprehensive domain coverage. It then creates diverse instructions by pairing these keywords with different cognitive levels from Bloom's Taxonomy. Finally, it uses self-consistency validation to ensure data quality. We apply this framework to generate datasets across seven challenging domains, such as mathematics, finance, and logical reasoning. Comprehensive evaluation demonstrates that models fine-tuned on our generated data achieve substantial improvements over existing data generation methods.
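The abstract describes a three-step pipeline: generate task-informed keywords, pair each keyword with a cognitive level from Bloom's Taxonomy to form diverse instructions, and filter the results with self-consistency validation. A minimal sketch of that control flow is below; all function names, prompt templates, and the stubbed LLM call are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the DS^2-Instruct pipeline outlined in the abstract.
# The prompt wording, thresholds, and fake_llm stub are assumptions for
# illustration only.
from collections import Counter

BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyze", "evaluate", "create"]

def make_instructions(domain, keywords):
    """Pair each task-informed keyword with every Bloom level to
    diversify the cognitive demand of the synthesized instructions."""
    return [
        f"[{domain}] At the '{level}' level, write a task about '{kw}'."
        for kw in keywords
        for level in BLOOM_LEVELS
    ]

def self_consistent_answer(instruction, sample_fn, n=5, min_agreement=0.6):
    """Self-consistency validation: sample several responses and keep the
    instruction only if a majority answer emerges; return it or None."""
    answers = [sample_fn(instruction) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n >= min_agreement else None

# Deterministic stand-in for an LLM call, just to show the control flow.
def fake_llm(prompt):
    return "answer-A"

instructions = make_instructions("finance", ["compound interest", "hedging"])
validated = [(inst, self_consistent_answer(inst, fake_llm))
             for inst in instructions[:3]]
```

With two keywords and six Bloom levels the sketch yields twelve candidate instructions; in practice each step would call a real LLM, and instructions whose sampled answers fail to reach agreement would be discarded.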
Problem

Research questions and friction points this paper is trying to address.

domain-specific
instruction tuning
data synthesis
large language models
zero-shot
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-Specific Data Synthesis
Instruction Tuning
Zero-Shot Generation
Bloom's Taxonomy
Self-Consistency Validation