Structure-aware Domain Knowledge Injection for Large Language Models

📅 2024-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high data dependency and low knowledge-injection efficiency of domain adaptation for large language models (LLMs), this paper proposes StructTuning, a structure-aware two-stage fine-tuning framework comprising Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). Inspired by pedagogical principles, StructTuning automatically constructs a domain-specific knowledge taxonomy and uses it to guide corpus reorganization and structured prompt generation, explicitly modeling hierarchical knowledge structures. With only 5% of the training data, StructTuning achieves state-of-the-art performance on LongBench and MMedBench, recovering 100% of the performance of fully supervised baselines and outperforming existing knowledge injection methods. It also generalizes well across model architectures and scales, remaining effective for diverse LLM families and parameter counts.

📝 Abstract
This paper introduces a pioneering methodology, termed StructTuning, to efficiently transform foundation Large Language Models (LLMs) into domain specialists. It reduces training-corpus needs to a mere 5% while achieving 100% of traditional knowledge injection performance. Motivated by structured human education, we propose a novel two-stage strategy for knowledge injection and alignment: Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). In the SCPT phase, we automatically extract the domain knowledge taxonomy and reorganize the training corpora, enabling LLMs to effectively link textual segments to targeted knowledge points within the taxonomy. In the SSFT phase, we explicitly prompt models to elucidate the underlying knowledge structure in their outputs, leveraging the structured domain insight to address practical problems. Our method was extensively evaluated across model architectures and scales on the LongBench and MMedBench datasets, demonstrating superior performance against other knowledge injection methods. We also explored our method's scalability across different training corpus sizes, laying the foundation for enhancing domain-specific LLMs with better data utilization.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs with domain-specific knowledge efficiently.
Reduce training data needs significantly for LLMs.
Improve knowledge injection performance in LLMs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-aware Continual Pre-Training
Structure-aware Supervised Fine-Tuning
Reduced training corpus needs
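The two stages above can be illustrated with a minimal data-construction sketch. This is a hypothetical rendering of the idea described in the abstract (the taxonomy, prompt formats, and function names are illustrative assumptions, not the paper's actual implementation): SCPT links each corpus segment to its knowledge point in the taxonomy, and SSFT prompts the model to surface the relevant knowledge structure before answering.

```python
# Hypothetical sketch of StructTuning-style training-data construction.
# All names and formats are illustrative assumptions, not the paper's code.

# A toy domain-knowledge taxonomy: each corpus segment is mapped to a
# hierarchical knowledge-point path (the SCPT stage exploits this link).
TAXONOMY = {
    "cardiology/arrhythmia": "Text about atrial fibrillation management ...",
    "cardiology/heart_failure": "Text about ejection fraction thresholds ...",
}

def make_scpt_sample(path: str, segment: str) -> str:
    """Structure-aware continual pre-training sample: prepend the taxonomy
    path so the model learns to associate the segment with its knowledge
    point during continued pre-training."""
    return f"[Knowledge point: {path}]\n{segment}"

def make_ssft_prompt(question: str, relevant_paths: list) -> str:
    """Structure-aware SFT prompt: ask the model to enumerate the relevant
    knowledge points before producing its answer, making the underlying
    knowledge structure explicit in the output."""
    outline = "\n".join(f"- {p}" for p in relevant_paths)
    return (
        f"Question: {question}\n"
        f"First list the relevant knowledge points:\n{outline}\n"
        f"Then answer using that structure."
    )

if __name__ == "__main__":
    # Stage 1 (SCPT): reorganized pre-training corpus.
    for path, segment in TAXONOMY.items():
        print(make_scpt_sample(path, segment))
    # Stage 2 (SSFT): structure-supervised instruction sample.
    print(make_ssft_prompt("How is AF managed?", ["cardiology/arrhythmia"]))
```

The design intuition, per the abstract, is that anchoring every segment to an explicit taxonomy position lets a small (5%) corpus carry the structural signal a much larger unstructured corpus would convey implicitly.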
Kai Liu
Zhejiang University, Alibaba Cloud
Ze Chen
Alibaba Group
Computer Vision
Zhihang Fu
Alibaba Cloud
Computer Vision, Machine Learning, LLM
Rongxin Jiang
Zhejiang University
Fan Zhou
Zhejiang University
Yaowu Chen
Zhejiang University
Yue Wu
Alibaba Cloud
Jieping Ye
Alibaba Cloud