🤖 AI Summary
Large language models for specialized professional domains are hindered by the scarcity of high-quality annotated data. To address this, the authors propose AQuilt, a framework that automatically synthesizes instruction-tuning data with high task relevance and strong reasoning ability, without human annotation, by leveraging only unlabeled domain-specific corpora through chain-of-thought construction and self-inspection. Its core innovation is a six-element data synthesis paradigm (Answer, Question, Unlabeled data, Inspection, Logic, and Task type) that enables customizable generation across domains and tasks. Trained on 703k synthesized examples, the resulting data synthesis model matches DeepSeek-V3 on downstream benchmarks at just 17% of its production cost, yields data with significantly higher relevance to downstream tasks, and generalizes well across tasks, offering a compelling trade-off among cost efficiency, data quality, and cross-task adaptability.
📝 Abstract
Despite the impressive performance of large language models (LLMs) in general domains, they often underperform in specialized domains. Existing approaches typically rely on data synthesis methods, and they yield promising results by using unlabeled data to capture domain-specific features. However, these methods either incur high computational costs or suffer from performance limitations, and they also generalize poorly across different tasks. To address these challenges, we propose AQuilt, a framework for constructing instruction-tuning data for any specialized domain from the corresponding unlabeled data, built on six elements: Answer, Question, Unlabeled data, Inspection, Logic, and Task type. By incorporating logic and inspection, we encourage explicit reasoning processes and self-inspection, which enhance model performance. Moreover, customizable task instructions enable high-quality data generation for any task. Using this framework, we construct a dataset of 703k examples to train a powerful data synthesis model. Experiments show that AQuilt is comparable to DeepSeek-V3 while requiring just 17% of its production cost. Further analysis demonstrates that our generated data exhibits higher relevance to downstream tasks. Source code, models, and scripts are available at https://github.com/Krueske/AQuilt.
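The six elements named in the abstract can be pictured as the schema of one synthesized training example. The sketch below is a minimal, hypothetical illustration of such a record; the field names, example text, and `AQuiltExample` class are assumptions for clarity, not the paper's actual data format.

```python
from dataclasses import dataclass, asdict


@dataclass
class AQuiltExample:
    """Hypothetical schema mirroring AQuilt's six elements
    (Answer, Question, Unlabeled data, Inspection, Logic, Task type).
    Field names and contents are illustrative, not the paper's format."""
    unlabeled_data: str  # source passage drawn from the domain corpus
    task_type: str       # customizable task instruction, e.g. "question answering"
    question: str        # instruction/question synthesized from the passage
    logic: str           # chain-of-thought reasoning supporting the answer
    answer: str          # final response
    inspection: str      # self-inspection verdict on the synthesized pair


# One illustrative record (all strings are made up for this sketch).
example = AQuiltExample(
    unlabeled_data="Aspirin irreversibly inhibits the enzymes COX-1 and COX-2.",
    task_type="question answering",
    question="Which enzymes does aspirin inhibit, and in what manner?",
    logic="The passage states that aspirin inhibits COX-1 and COX-2, "
          "and that this inhibition is irreversible.",
    answer="Aspirin irreversibly inhibits COX-1 and COX-2.",
    inspection="PASS: the answer is fully supported by the source passage.",
)

# The record carries exactly the six elements from the paradigm.
print(sorted(asdict(example)))
```

In this view, `logic` and `inspection` are what distinguish the paradigm from plain question-answer synthesis: one element carries the reasoning trace, the other a quality check that lets low-quality examples be filtered out.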