🤖 AI Summary
This study addresses the high cost and low efficiency of traditional ontology construction in specialized domains such as casting, which relies heavily on manual annotation and conventional NLP techniques. It presents the first systematic comparison of three few-shot information extraction strategies based on large language models (LLMs): pre-trained model prompting, in-context learning (ICL), and fine-tuning, for automatically extracting domain-specific terms and relations to build ontologies. Through expert validation, the research identifies the most effective LLM-based strategy and constructs a high-quality ontology for the casting domain. The proposed approach significantly improves construction efficiency while maintaining high accuracy, offering a robust and scalable paradigm for automated knowledge modeling in specialized fields.
📝 Abstract
Ontologies are essential for structuring domain knowledge, improving its accessibility, sharing, and reuse. However, traditional ontology construction relies on manual annotation and conventional natural language processing (NLP) techniques, making the process labour-intensive and costly, especially in specialised fields such as casting manufacturing. The rise of Large Language Models (LLMs) offers new possibilities for automating knowledge extraction. This study investigates three LLM-based approaches to extracting terms and relations from domain-specific texts with limited data: a pre-trained LLM-driven method, an in-context learning (ICL) method, and a fine-tuning method. We compare their performance and use the best-performing method to build a casting ontology, which is validated by a domain expert.
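To make the in-context learning (ICL) strategy concrete, the sketch below shows how a few-shot prompt for term and relation extraction might be assembled, together with a simple parser for triple-formatted model output. This is a minimal illustration, not the paper's actual prompt: the example sentences, labels, and `(head, relation, tail)` output format are assumptions introduced here for demonstration.

```python
# Minimal ICL sketch: labelled examples are prepended to the query sentence,
# and the LLM's reply is expected as "(head, relation, tail)" lines.
# The casting-domain examples below are hypothetical.

FEW_SHOT_EXAMPLES = [
    ("Sand casting uses a sand mould to shape molten metal.",
     "(sand casting, uses, sand mould)"),
    ("Die casting forces molten metal into a steel die under pressure.",
     "(die casting, uses, steel die)"),
]

def build_icl_prompt(text, examples=FEW_SHOT_EXAMPLES):
    """Concatenate an instruction, labelled examples, and the query sentence."""
    parts = ["Extract (term, relation, term) triples from the sentence."]
    for sentence, triple in examples:
        parts.append(f"Sentence: {sentence}\nTriples: {triple}")
    parts.append(f"Sentence: {text}\nTriples:")
    return "\n\n".join(parts)

def parse_triples(response):
    """Parse lines like '(a, b, c)' from the model's reply into 3-tuples."""
    triples = []
    for line in response.splitlines():
        line = line.strip()
        if line.startswith("(") and line.endswith(")"):
            fields = [f.strip() for f in line[1:-1].split(",")]
            if len(fields) == 3:
                triples.append(tuple(fields))
    return triples
```

The prompt string returned by `build_icl_prompt` would be sent to the chosen LLM, and `parse_triples` applied to its reply; the resulting triples can then be reviewed by a domain expert before being added to the ontology.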