Conversational Lexicography: Querying Lexicographic Data on Knowledge Graphs with SPARQL through Natural Language

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of enabling non-expert users to access lexical-semantic data in knowledge graphs such as Wikidata through natural language. To this end, the authors propose a four-dimensional taxonomy capturing the complexity of Wikidata's lexicographic data ontology module and construct a template-based dataset of over 1.2 million mappings from natural language utterances to SPARQL queries. Using this dataset, they systematically evaluate GPT-2, Phi-1.5, and GPT-3.5-Turbo on NL-to-SPARQL translation, including generalization to query patterns unseen during training. Results show that larger models, particularly GPT-3.5-Turbo, significantly outperform smaller ones on unseen query patterns, indicating that model scale and pre-training diversity are critical for cross-pattern generalization. However, robust fine-grained semantic understanding and comprehensive coverage of lexicographic ontologies remain open challenges.
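To make the hurdle concrete, a query of the kind the paper targets (illustrative only, not taken from the paper's dataset) retrieves English nouns from Wikidata's lexeme module. The identifiers are real Wikidata IDs; the prefixes are those predefined by the Wikidata Query Service:

```sparql
# Illustrative sketch: lexemes whose language is English (Q1860)
# and whose lexical category is noun (Q1084), with their lemmas.
SELECT ?lexeme ?lemma WHERE {
  ?lexeme dct:language wd:Q1860 ;
          wikibase:lexicalCategory wd:Q1084 ;
          wikibase:lemma ?lemma .
}
LIMIT 10
```

Writing such queries requires knowing SPARQL syntax plus Wikidata's lexeme vocabulary and entity IDs, which is exactly the expertise a natural language interface is meant to remove.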

📝 Abstract
Knowledge graphs offer an excellent solution for representing the lexical-semantic structures of lexicographic data. However, working with the SPARQL query language represents a considerable hurdle for many non-expert users who could benefit from the advantages of this technology. This paper addresses the challenge of creating natural language interfaces for lexicographic data retrieval on knowledge graphs such as Wikidata. We develop a multidimensional taxonomy capturing the complexity of Wikidata's lexicographic data ontology module through four dimensions and create a template-based dataset with over 1.2 million mappings from natural language utterances to SPARQL queries. Our experiments with GPT-2 (124M), Phi-1.5 (1.3B), and GPT-3.5-Turbo reveal significant differences in model capabilities. While all models perform well on familiar patterns, only GPT-3.5-Turbo demonstrates meaningful generalization capabilities, suggesting that model size and diverse pre-training are crucial for adaptability in this domain. However, significant challenges remain in achieving robust generalization, handling diverse linguistic data, and developing scalable solutions that can accommodate the full complexity of lexicographic knowledge representation.
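The template-based dataset described above pairs each SPARQL query pattern with natural language phrasings by filling shared slots. The sketch below is a minimal illustration of that construction idea, not the authors' actual templates or slot vocabulary; the template wording and function names are hypothetical, though the Wikidata IDs (Q1860 for English, Q1084 for noun) are real:

```python
# Hypothetical templates: one NL pattern and one SPARQL pattern sharing slots.
NL_TEMPLATE = "List all {category} lemmas in {language}."
SPARQL_TEMPLATE = (
    "SELECT ?lexeme ?lemma WHERE {{\n"
    "  ?lexeme dct:language wd:{language_qid} ;\n"
    "          wikibase:lexicalCategory wd:{category_qid} ;\n"
    "          wikibase:lemma ?lemma .\n"
    "}}"
)

def make_pair(category, category_qid, language, language_qid):
    """Fill both templates with the same slot values to produce one
    (utterance, query) training instance."""
    nl = NL_TEMPLATE.format(category=category, language=language)
    sparql = SPARQL_TEMPLATE.format(language_qid=language_qid,
                                    category_qid=category_qid)
    return nl, sparql

nl, sparql = make_pair("noun", "Q1084", "English", "Q1860")
print(nl)      # → List all noun lemmas in English.
print(sparql)
```

Crossing many such templates with many slot fillers is what lets a modest number of patterns yield over a million training instances, while also making it easy to hold out entire patterns to test cross-pattern generalization.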
Problem

Research questions and friction points this paper is trying to address.

Enabling non-experts to query lexicographic data via natural language
Mapping natural language to SPARQL for Wikidata lexicographic queries
Assessing model generalization for lexicographic knowledge graph interfaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Natural language interface for Wikidata queries
Template-based dataset for SPARQL mappings
GPT-3.5-Turbo for lexicographic generalization