🤖 AI Summary
Existing large language models (LLMs) exhibit limited capability in knowledge extraction and structured generation for Hunan’s historical figures due to data scarcity and insufficient domain-specific cultural knowledge.
Method: We propose an instruction-tuning framework tailored to low-resource regional cultures, featuring a Huxiang-culture schema-guided instruction template, a fine-grained domain-specific instruction dataset, and parameter-efficient fine-tuning (PEFT) applied to Qwen2.5-7B, Qwen3-8B, DeepSeek-R1-Distill-Qwen-7B, and Llama-3.1-8B-Instruct.
Contribution/Results: A domain-specific evaluation benchmark is established. Experimental results show that Qwen3-8B achieves 89.39 points under a 100-sample, 50-iteration training regime, significantly outperforming baseline models. This work provides a reusable methodology and practical paradigm for lightweight, fine-grained construction of cultural heritage knowledge graphs.
📝 Abstract
Large language models and knowledge graphs offer strong potential for advancing research on historical culture by supporting the extraction, analysis, and interpretation of cultural heritage. Using Hunan's modern historical celebrities shaped by Huxiang culture as a case study, pre-trained large models can help researchers efficiently extract key information, including biographical attributes, life events, and social relationships, from textual sources and construct structured knowledge graphs. However, systematic data resources for Hunan's historical celebrities remain limited, and general-purpose models often underperform in domain knowledge extraction and structured output generation in such low-resource settings. To address these issues, this study proposes a supervised fine-tuning approach for enhancing domain-specific information extraction. First, we design a fine-grained, schema-guided instruction template tailored to the Hunan historical celebrities domain and build an instruction-tuning dataset to mitigate the lack of domain-specific training corpora. Second, we apply parameter-efficient instruction fine-tuning to four publicly available large language models (Qwen2.5-7B, Qwen3-8B, DeepSeek-R1-Distill-Qwen-7B, and Llama-3.1-8B-Instruct) and develop evaluation criteria for assessing their extraction performance. Experimental results show that all models exhibit substantial performance gains after fine-tuning. Among them, Qwen3-8B achieves the strongest results, reaching a score of 89.3866 with 100 samples and 50 training iterations. This study provides new insights into fine-tuning vertical large language models for regional historical and cultural domains and highlights their potential for cost-effective applications in cultural heritage knowledge extraction and knowledge graph construction.
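To make the schema-guided instruction template concrete, below is a minimal sketch of how one instruction-tuning record for this kind of extraction task might be assembled. The schema field names, the helper function, and the example figure are illustrative assumptions, not the paper's actual template; only the three entity categories (biographical attributes, life events, social relationships) come from the abstract.

```python
import json

# Hypothetical domain schema; the three top-level categories follow the
# abstract, but the individual field names are illustrative assumptions.
SCHEMA = {
    "biographical_attributes": ["name", "birth_year", "death_year", "birthplace"],
    "life_events": ["time", "event"],
    "social_relationships": ["person", "relation"],
}

def build_instruction_sample(source_text: str, extraction: dict) -> dict:
    """Wrap a raw passage and its gold extraction into one record in the
    common instruction/input/output format used for instruction tuning."""
    instruction = (
        "Extract structured knowledge about the Hunan historical figure "
        "mentioned in the text. Return JSON with the fields: "
        + ", ".join(SCHEMA)
    )
    return {
        "instruction": instruction,
        "input": source_text,
        # Serialize the gold answer so the model learns structured output.
        "output": json.dumps(extraction, ensure_ascii=False),
    }

sample = build_instruction_sample(
    "Zuo Zongtang (1812-1885), born in Xiangyin, Hunan, ...",
    {
        "biographical_attributes": {
            "name": "Zuo Zongtang",
            "birth_year": 1812,
            "death_year": 1885,
            "birthplace": "Xiangyin, Hunan",
        },
        "life_events": [],
        "social_relationships": [],
    },
)
print(sample["instruction"])
```

Records of this shape can then be fed to a standard PEFT fine-tuning pipeline; keeping the gold output as serialized JSON is what pushes the fine-tuned model toward the structured generation the study evaluates.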