🤖 AI Summary
This work addresses the challenge of adapting large language models to specialized domains, a task often constrained by the scarcity of high-quality, affordable domain-specific instruction-tuning data. The authors propose a zero-shot instruction synthesis framework that, for the first time, integrates Bloom's cognitive taxonomy with task-aware keywords to automatically generate diverse, multi-domain instructions. To ensure the professionalism and reliability of the synthesized data, the framework incorporates a self-consistency verification mechanism. Notably, the approach requires no human annotation and produces high-quality instruction data across seven specialized domains. Models fine-tuned on this synthetic data significantly outperform those trained with existing data synthesis methods.
📝 Abstract
Adapting Large Language Models (LLMs) to specialized domains requires high-quality instruction tuning datasets, which are expensive to create through human annotation. Existing data synthesis methods focus on general-purpose tasks and fail to capture domain-specific terminology and reasoning patterns. To address this, we introduce DS$^2$-Instruct, a zero-shot framework that generates domain-specific instruction datasets without human supervision. Our approach first generates task-informed keywords to ensure comprehensive domain coverage. It then creates diverse instructions by pairing these keywords with different cognitive levels from Bloom's Taxonomy. Finally, it uses self-consistency validation to ensure data quality. We apply this framework to generate datasets across seven challenging domains, such as mathematics, finance, and logical reasoning. Comprehensive evaluation demonstrates that models fine-tuned on our generated data achieve substantial improvements over existing data generation methods.
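The three stages described in the abstract (keyword generation, Bloom's-Taxonomy instruction synthesis, self-consistency validation) can be sketched as a simple pipeline. This is a minimal illustration under stated assumptions, not the authors' implementation: the prompt wording, function names, and the `llm` callable are all hypothetical placeholders.

```python
# Minimal sketch of a DS^2-Instruct-style pipeline. All prompts and
# function names are illustrative assumptions, not the paper's actual API.
from collections import Counter

# The six cognitive levels of Bloom's Taxonomy used to diversify instructions.
BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyze", "evaluate", "create"]

def generate_keywords(domain, llm, n=3):
    """Stage 1: elicit task-informed keywords for broad domain coverage."""
    return llm(f"List {n} key concepts in {domain}").split(", ")

def synthesize_instructions(keywords, llm):
    """Stage 2: pair every keyword with every Bloom's Taxonomy level."""
    return [llm(f"Write a '{level}'-level question about {kw}")
            for kw in keywords for level in BLOOM_LEVELS]

def self_consistent_answer(instruction, llm, samples=5):
    """Stage 3: keep an answer only if a majority of samples agree."""
    answers = [llm(instruction) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count > samples // 2 else None  # drop inconsistent items
```

With a real model behind `llm`, each keyword yields six instructions (one per cognitive level), and only instructions whose sampled answers agree survive the validation stage.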