Large Language Models as Oracles for Ontology Alignment

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-precision mapping validation in ontology alignment relies heavily on costly human experts, which hinders scalability. Method: This paper proposes an LLM-driven verification framework that focuses selectively on the most uncertain candidate mappings produced by alignment systems, using ontology-structure-enhanced prompt templates to guide state-of-the-art LLMs in assessing semantic plausibility. Contribution/Results: The approach substitutes an LLM for the human expert in the validation loop. Empirical evaluation on OAEI benchmark tasks shows that LLMs achieve near-oracle discrimination accuracy on the critical uncertain samples, substantially outperforming conventional validation strategies, while maintaining high precision and drastically reducing human effort. This offers an efficient, scalable, and low-cost verification pathway for cross-domain data integration.
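The selective-verification idea in the summary can be pictured as follows. This is a minimal sketch, not the authors' implementation: the data model, confidence thresholds, and prompt wording are all hypothetical, and the LLM call is passed in as a plain function so it can be stubbed.

```python
# Sketch: route only the alignment system's most uncertain candidate
# mappings to an LLM for a yes/no plausibility check. Confident
# mappings are accepted directly; low-confidence ones are discarded
# without spending any LLM calls.

def is_uncertain(confidence, low=0.4, high=0.7):
    """A mapping counts as 'uncertain' if its system confidence
    falls inside a band (thresholds here are illustrative)."""
    return low <= confidence < high

def build_prompt(src_label, tgt_label, src_parents, tgt_parents):
    """Embed a little ontology context (parent classes) in the prompt,
    loosely mimicking ontology-structure-enhanced templates."""
    return (
        f"Source concept: '{src_label}' (subclass of {', '.join(src_parents)}).\n"
        f"Target concept: '{tgt_label}' (subclass of {', '.join(tgt_parents)}).\n"
        "Do these two concepts refer to the same notion? Answer yes or no."
    )

def verify_mappings(candidates, ask_llm, accept_threshold=0.7):
    """Accept confident mappings; ask the LLM about uncertain ones."""
    accepted = []
    for m in candidates:
        if m["confidence"] >= accept_threshold:
            accepted.append(m)
        elif is_uncertain(m["confidence"]):
            prompt = build_prompt(
                m["src"], m["tgt"], m["src_parents"], m["tgt_parents"]
            )
            if ask_llm(prompt).strip().lower().startswith("yes"):
                accepted.append(m)
    return accepted
```

In use, `ask_llm` would wrap a real model client; a lambda returning `"yes"` or `"no"` is enough to exercise the control flow.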

📝 Abstract
Ontology alignment plays a crucial role in integrating diverse data sources across domains. A plethora of systems tackle the ontology alignment problem, yet challenges persist in producing high-quality correspondences among a set of input ontologies. Keeping a human in the loop during the alignment process is essential in applications requiring very accurate mappings. User involvement is, however, expensive when dealing with large ontologies. In this paper, we explore the feasibility of using Large Language Models (LLMs) as an alternative to the domain expert. The use of the LLM focuses only on the validation of the subset of correspondences where an ontology alignment system is very uncertain. We have conducted an extensive evaluation over several matching tasks of the Ontology Alignment Evaluation Initiative (OAEI), analysing the performance of several state-of-the-art LLMs using different ontology-driven prompt templates. The LLM results are also compared against simulated Oracles with variable error rates.
Problem

Research questions and friction points this paper is trying to address.

Improving quality of ontology alignment correspondences
Reducing human involvement in large ontology mapping
Validating uncertain alignments using Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to validate uncertain ontology alignments
Leveraging ontology-driven prompts for LLM evaluation
Comparing LLM performance against simulated Oracles
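The last point, comparing LLMs against simulated Oracles with variable error rates, can be sketched as an oracle that answers from a gold-standard reference alignment but flips its verdict with a fixed probability. This is an illustrative construction, not the paper's exact protocol; the function and parameter names are mine.

```python
import random

def make_oracle(gold, error_rate, seed=0):
    """Simulated Oracle: validates a candidate mapping against a
    gold-standard set of correspondences, but with probability
    `error_rate` returns the wrong verdict. A fixed seed makes
    experiments reproducible."""
    rng = random.Random(seed)

    def oracle(mapping):
        truth = mapping in gold
        if rng.random() < error_rate:
            return not truth  # simulated expert mistake
        return truth

    return oracle
```

Sweeping `error_rate` from 0 upward yields a family of increasingly unreliable oracles against which an LLM's discrimination accuracy can be benchmarked.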