Efficiently Building a Domain-Specific Large Language Model from Scratch: A Case Study of a Classical Chinese Large Language Model

📅 2025-05-17
🤖 AI Summary
To address the domain-knowledge deficiency and limited fine-tuning efficacy of general-purpose large language models (LLMs) on classical Chinese understanding and generation tasks, this work introduces AI Taiyan—a specialized LLM tailored for classical Chinese. Methodologically, we demonstrate for the first time that a high-performance classical Chinese model can be achieved with only 1.8 billion parameters; we construct a high-quality, in-house classical Chinese corpus and integrate customized tokenization, domain-aware pretraining, and task-oriented multi-stage fine-tuning. Our key contribution is establishing a holistic, ancient-text-adapted paradigm unifying data curation, training methodology, and evaluation protocols. Experiments show that AI Taiyan consistently outperforms general-purpose LLMs (e.g., Qwen, ChatGLM) and traditional approaches across core tasks—including punctuation restoration, allusion identification, semantic explanation, and classical-to-modern translation—with multiple metrics reaching or exceeding human-level performance.
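The staged recipe named above (domain-aware pretraining followed by task-oriented multi-stage fine-tuning) can be sketched as a simple sequential pipeline. The stage names, objectives, and data labels below are illustrative assumptions for exposition, not the paper's actual training configuration.

```python
# Hypothetical sketch of a multi-stage training schedule like the one the
# summary describes: each stage feeds its resulting state into the next.
STAGES = [
    {"name": "domain_pretraining", "objective": "next-token",
     "data": "classical_chinese_corpus"},        # assumed stage, not from the paper
    {"name": "task_finetuning", "objective": "supervised",
     "data": "punctuation+allusion+translation"},  # assumed stage
    {"name": "refinement", "objective": "supervised",
     "data": "high_quality_subset"},               # assumed stage
]

def run_pipeline(stages, train_step):
    """Run each stage in order, threading training state through."""
    state = {"steps_done": 0}
    for stage in stages:
        state = train_step(state, stage)
    return state

def toy_train_step(state, stage):
    # Placeholder: a real step would update model weights on stage["data"].
    return {"steps_done": state["steps_done"] + 1, "last_stage": stage["name"]}
```

The point of the structure is that later stages assume the representations learned earlier, so ordering matters: swapping pretraining and fine-tuning stages would not yield the same model.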

📝 Abstract
General-purpose large language models demonstrate notable capabilities in language comprehension and generation, achieving results comparable to, or even surpassing, human performance in many language information processing tasks. Nevertheless, when general models are applied to certain specific domains, e.g., Classical Chinese texts, their effectiveness is often unsatisfactory, and fine-tuning open-source foundation models similarly struggles to adequately incorporate domain-specific knowledge. To address this challenge, this study developed AI Taiyan, a large language model specifically designed for understanding and generating Classical Chinese. Experiments show that with a reasonable model design, data processing, foundational training, and fine-tuning, satisfactory results can be achieved with only 1.8 billion parameters. In key Classical Chinese information processing tasks such as punctuation, identification of allusions, explanation of word meanings, and translation between ancient and modern Chinese, this model exhibits a clear advantage over both general-purpose large models and domain-specific traditional models, achieving levels close to or surpassing human baselines. This research provides a reference for the efficient construction of specialized domain-specific large language models. Furthermore, the paper discusses the application of the model in fields such as the collation of ancient texts, dictionary editing, and language research, illustrated with case studies.
Problem

Research questions and friction points this paper is trying to address.

Developing a domain-specific LLM for Classical Chinese understanding and generation
Overcoming limitations of general models in specialized text processing
Achieving human-level performance with efficient model design and training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed AI Taiyan for Classical Chinese understanding
Optimized model design with 1.8B parameters
Achieved superior performance in domain-specific tasks