🤖 AI Summary
To address the heavy data dependency and low knowledge injection efficiency of domain adaptation for large language models (LLMs), this paper proposes StructTuning, a structure-aware two-stage fine-tuning framework comprising Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). Inspired by pedagogical principles, StructTuning automatically constructs a domain-specific knowledge taxonomy and uses it to guide corpus reorganization and structured prompt generation, enabling explicit modeling of hierarchical knowledge structures. With only 5% of the training corpus, StructTuning matches 100% of the performance of traditional full-corpus knowledge injection on LongBench and MMedBench, outperforming existing knowledge injection methods. Moreover, it demonstrates strong cross-architecture and cross-scale generalizability, maintaining effectiveness across diverse LLM families and parameter counts.
📝 Abstract
This paper introduces a pioneering methodology, termed StructTuning, to efficiently transform foundation Large Language Models (LLMs) into domain specialists. It reduces the required training corpus to a mere 5% while achieving an impressive 100% of traditional knowledge injection performance. Motivated by structured human education, we propose a novel two-stage strategy for knowledge injection and alignment: Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). In the SCPT phase, we automatically extract the domain knowledge taxonomy and reorganize the training corpora, enabling LLMs to effectively link textual segments to targeted knowledge points within the taxonomy. In the SSFT phase, we explicitly prompt models to elucidate the underlying knowledge structure in their outputs, leveraging the structured domain insight to address practical problems. Our method was extensively evaluated across model architectures and scales on the LongBench and MMedBench datasets, demonstrating superior performance against other knowledge injection methods. We also explored the method's scalability across different training corpus sizes, laying the foundation for enhancing domain-specific LLMs with better data utilization.
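To make the two-stage idea concrete, the sketch below illustrates how SCPT and SSFT training samples might be constructed from a knowledge taxonomy. This is a minimal, hypothetical illustration assuming a toy taxonomy and sample formats of our own invention (the path prefix `[Knowledge point: …]`, the `make_scpt_samples`/`make_ssft_sample` helpers); it is not the paper's actual implementation.

```python
# A toy domain-knowledge taxonomy: inner dicts are taxonomy levels,
# leaf lists hold corpus text segments mapped to that knowledge point.
# (Illustrative assumption, not the paper's data format.)
taxonomy = {
    "Cardiology": {
        "Arrhythmia": ["Atrial fibrillation increases stroke risk."],
        "Heart failure": ["ACE inhibitors reduce mortality in HFrEF."],
    }
}

def make_scpt_samples(taxonomy):
    """SCPT-style reorganization: link each corpus segment to its
    knowledge point by prefixing it with the taxonomy path."""
    samples = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, child in node.items():
                walk(child, path + [key])
        else:  # leaf: list of text segments
            for segment in node:
                prefix = " > ".join(path)
                samples.append(f"[Knowledge point: {prefix}]\n{segment}")

    walk(taxonomy, [])
    return samples

def make_ssft_sample(question, answer, knowledge_path):
    """SSFT-style supervision: the target response first states the
    underlying knowledge structure, then gives the answer."""
    structure = " > ".join(knowledge_path)
    return {
        "prompt": question,
        "response": f"Relevant knowledge: {structure}.\nAnswer: {answer}",
    }

scpt = make_scpt_samples(taxonomy)
ssft = make_ssft_sample(
    "Which drug class reduces mortality in HFrEF?",
    "ACE inhibitors",
    ["Cardiology", "Heart failure"],
)
print(scpt[0])
print(ssft["response"])
```

The key design point mirrored here is that both stages expose the same hierarchical structure: SCPT ties raw text to taxonomy paths during continued pre-training, and SSFT supervises the model to surface that same structure when answering questions.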