Knowledge Prompting: How Knowledge Engineers Use Large Language Models

📅 2024-08-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Knowledge engineering (KE) faces significant challenges in constructing large-scale, dynamic, multilingual, and multimodal knowledge graphs (KGs). Method: This study employs a hackathon-style mixed-methods approach—including interviews, ethnographic observation, artifact analysis, and empirical LLM experiments—to investigate how large language models (LLMs) can serve as effective collaborative assistants for knowledge engineers. Contribution/Results: We identify prompt engineering as a critical yet underappreciated core competency in KE practice. We introduce “KG Cards,” the first responsible AI framework specifically designed for KG construction, addressing ethical implementation gaps. Empirical results demonstrate that LLMs substantially improve KG construction efficiency; however, new bottlenecks emerge in trustworthiness assessment, cross-lingual alignment, and accountability governance. Collectively, this work provides empirically grounded guidelines and methodological foundations for human-AI collaboration in knowledge engineering.

📝 Abstract
Despite many advances in knowledge engineering (KE), challenges remain in areas such as engineering knowledge graphs (KGs) at scale, keeping up with evolving domain knowledge, multilingualism, and multimodality. Recently, KE has used LLMs to support semi-automatic tasks, but the most effective use of LLMs to support knowledge engineers across KE activities is still in its infancy. To explore the vision of LLM copilots for KE and change existing KE practices, we conducted a multimethod study during a KE hackathon. We investigated participants' views on the use of LLMs, the challenges they face, the skills they may need to integrate LLMs into their practices, and how they use LLMs responsibly. We found participants felt LLMs could contribute to improving efficiency when engineering KGs, but presented increased challenges around the already complex issue of evaluating KE tasks. We discovered prompting to be a useful but undervalued skill for knowledge engineers working with LLMs, and note that natural language processing skills may become more relevant across more roles in KG construction. Integrating LLMs into KE tasks needs to be mindful of potential risks and harms related to responsible AI. Given the limited ethical training most knowledge engineers receive, solutions such as our suggested `KG cards', based on data cards, could be a useful guide for KG construction. Our findings can support designers of KE AI copilots, KE researchers, and practitioners using advanced AI to develop trustworthy applications, propose new methodologies for KE, and operate new technologies responsibly.
Problem

Research questions and friction points this paper is trying to address.

Addressing challenges in scaling knowledge graph engineering with LLMs
Investigating effective prompting techniques for knowledge engineering tasks
Ensuring responsible AI integration in knowledge engineering practices
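To make the prompting-for-KE idea concrete, here is a minimal illustrative sketch of a triple-extraction workflow of the kind the paper discusses. The prompt template, the `(subject | predicate | object)` output convention, and the parser are my assumptions, not the paper's method; the LLM call is stubbed out with a fixed response so the sketch stays self-contained.

```python
import re

def build_extraction_prompt(text: str) -> str:
    """Assemble a hypothetical prompt asking an LLM for KG triples.

    The template is an illustrative assumption, not a template from the paper.
    """
    return (
        "Extract knowledge-graph triples from the text below.\n"
        "Return one triple per line as: (subject | predicate | object)\n\n"
        f"Text: {text}"
    )

def parse_triples(llm_output: str):
    """Parse lines of the form '(s | p | o)' into (s, p, o) tuples."""
    pattern = re.compile(r"\(\s*([^|]+?)\s*\|\s*([^|]+?)\s*\|\s*([^)]+?)\s*\)")
    return [m.groups() for m in pattern.finditer(llm_output)]

# Stubbed LLM response instead of a real API call, so the sketch runs as-is.
fake_response = (
    "(Ada Lovelace | collaborated with | Charles Babbage)\n"
    "(Ada Lovelace | born in | London)"
)
triples = parse_triples(fake_response)
print(triples)
```

In a real pipeline the stubbed response would come from an LLM given `build_extraction_prompt(...)`, and the parsed triples would still need the trustworthiness checks and human review the paper flags as a new bottleneck.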
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge engineers use LLMs as copilots for KG construction
Prompting is identified as an undervalued skill for engineers
KG cards are proposed to guide responsible AI integration
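The paper's `KG cards' are modelled on data cards but no fixed schema is given here, so the following is only a sketch of what such a card might record; every field name is my assumption, chosen to reflect the concerns the paper raises (provenance, multilingualism, LLM involvement, limitations).

```python
from dataclasses import dataclass, field, asdict

@dataclass
class KGCard:
    """Hypothetical 'KG card' sketch, loosely modelled on data cards.

    Field names are illustrative assumptions, not the paper's specification.
    """
    name: str
    description: str
    sources: list = field(default_factory=list)        # provenance of the triples
    languages: list = field(default_factory=list)      # cross-lingual coverage
    license: str = "unspecified"
    intended_use: str = ""
    known_limitations: list = field(default_factory=list)
    llm_involvement: str = ""                          # how LLMs were used, if at all

    def to_dict(self) -> dict:
        """Serialise the card, e.g. for publishing alongside the KG."""
        return asdict(self)

# Example card for a toy KG (all values invented for illustration).
card = KGCard(
    name="ExampleKG",
    description="Toy knowledge graph of historical figures.",
    sources=["public-domain biographies", "LLM-extracted triples, human-reviewed"],
    languages=["en", "es"],
    license="CC0-1.0",
    intended_use="Research on human-AI collaboration in KE.",
    known_limitations=["LLM-extracted triples not exhaustively verified"],
    llm_involvement="Entity and relation extraction via prompting, with review.",
)
print(card.to_dict())
```

A card like this could be published next to the KG itself, giving downstream users the accountability and trustworthiness context the paper argues is currently missing.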
Elisavet Koutsiana
King’s College London, United Kingdom
Johanna Walker
King’s College London, United Kingdom
Michelle Nwachukwu
King’s College London, United Kingdom
Albert Meroño-Peñuela
Associate Professor (Senior Lecturer) in Computer Science, King's College London
E. Simperl
King’s College London, United Kingdom