AQuilt: Weaving Logic and Self-Inspection into Low-Cost, High-Relevance Data Synthesis for Specialist LLMs

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Professional-domain large language models are hindered by the scarcity of high-quality annotated data. To address this, we propose AQuilt, a framework that automatically synthesizes instruction-tuning data with high task relevance and strong reasoning ability, without human annotation, by leveraging only unlabeled domain-specific corpora through chain-of-thought construction and self-inspection mechanisms. Its core innovation is a six-element data synthesis paradigm spelling the framework's name: Answer, Question, Unlabeled data, Inspection, Logic, and Task type, enabling customizable generation across domains and tasks. A data synthesis model trained on 703k synthesized examples matches DeepSeek-V3's performance on downstream benchmarks at just 17% of its production cost. Moreover, its generated data shows significantly higher relevance to downstream tasks and superior generalization, offering a compelling trade-off among cost-efficiency, data quality, and cross-task adaptability.

📝 Abstract
Despite the impressive performance of large language models (LLMs) in general domains, they often underperform in specialized domains. Existing approaches typically rely on data synthesis methods and yield promising results by using unlabeled data to capture domain-specific features. However, these methods either incur high computational costs or suffer from performance limitations, while also demonstrating insufficient generalization across different tasks. To address these challenges, we propose AQuilt, a framework for constructing instruction-tuning data for any specialized domain from corresponding unlabeled data, comprising Answer, Question, Unlabeled data, Inspection, Logic, and Task type. By incorporating logic and inspection, we encourage reasoning processes and self-inspection to enhance model performance. Moreover, customizable task instructions enable high-quality data generation for any task. As a result, we construct a dataset of 703k examples to train a powerful data synthesis model. Experiments show that AQuilt is comparable to DeepSeek-V3 while utilizing just 17% of the production cost. Further analysis demonstrates that our generated data exhibits higher relevance to downstream tasks. Source code, models, and scripts are available at https://github.com/Krueske/AQuilt.
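The six-element paradigm described in the abstract can be pictured as a simple record per synthesized example, where the Inspection field gates which examples survive into training data. The sketch below is purely illustrative: the field names, the "pass"/"fail" verdict format, and the filter are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of AQuilt's six elements (A, Q, U, I, L, T).
# Field names and verdict format are illustrative assumptions.
@dataclass
class AquiltExample:
    answer: str       # A: target answer
    question: str     # Q: synthesized instruction/question
    unlabeled: str    # U: source passage from the unlabeled corpus
    inspection: str   # I: self-inspection verdict, e.g. "pass"/"fail"
    logic: str        # L: chain-of-thought rationale
    task_type: str    # T: task label, e.g. "QA"

def keep(example: AquiltExample) -> bool:
    """Retain only examples whose self-inspection verdict passes."""
    return example.inspection.strip().lower() == "pass"

corpus = [
    AquiltExample("Paris", "What is the capital of France?",
                  "France's capital and largest city is Paris...",
                  "pass", "The passage states the capital is Paris.", "QA"),
    AquiltExample("Lyon", "What is the capital of France?",
                  "France's capital and largest city is Paris...",
                  "fail", "The answer contradicts the passage.", "QA"),
]
filtered = [ex for ex in corpus if keep(ex)]
print(len(filtered))  # → 1
```

In this reading, logic supplies the reasoning trace that trains chain-of-thought behavior, while inspection acts as a built-in quality filter, which is how the framework avoids human annotation.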
Problem

Research questions and friction points this paper is trying to address.

General-purpose LLMs underperform in specialized domains
Existing data synthesis methods are costly or limited
Synthesized data often lacks explicit reasoning and quality inspection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines logic and self-inspection for data synthesis
Uses unlabeled data for specialized domain adaptation
Reduces costs while maintaining high task relevance
👥 Authors
Xiaopeng Ke (Nanjing University): deep learning, adversarial learning, metric learning, trustworthy AI
Hexuan Deng (Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China)
Xuebo Liu (Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China)
Jun Rao (Harbin Institute of Technology, Shenzhen): LLMs, efficient post-training, knowledge distillation, multimodal
Zhenxi Song (unknown affiliation): AI for neuroscience, brain-computer interfaces, EEG/MRI analysis
Jun Yu (School of Intelligence Science and Engineering, Harbin Institute of Technology, Shenzhen, China)
Min Zhang (Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China)