🤖 AI Summary
High-precision mapping validation in ontology alignment relies heavily on costly human experts, which hinders scalability. Method: This paper proposes an LLM-driven verification framework that focuses only on the most uncertain candidate mappings output by alignment systems, using ontology-structure-enhanced prompt templates to guide state-of-the-art LLMs in assessing semantic plausibility. Contribution/Results: The approach positions the LLM as a substitute for the human expert in the verification loop. Empirical evaluation on OAEI benchmark tasks shows that LLMs achieve near-oracle discrimination accuracy on the critical uncertain samples, substantially outperforming conventional validation strategies. Crucially, the approach maintains high precision while drastically reducing human effort, offering an efficient, scalable, and low-cost verification pathway for cross-domain data integration.
📝 Abstract
Ontology alignment plays a crucial role in integrating diverse data sources across domains. A plethora of systems tackle the ontology alignment problem, yet challenges persist in producing high-quality correspondences among a set of input ontologies. Keeping a human in the loop during the alignment process is essential in applications that require very accurate mappings. User involvement is, however, expensive when dealing with large ontologies. In this paper, we explore the feasibility of using Large Language Models (LLMs) as an alternative to the domain expert. The LLM is used only to validate the subset of correspondences for which an ontology alignment system is highly uncertain. We have conducted an extensive evaluation over several matching tasks of the Ontology Alignment Evaluation Initiative (OAEI), analysing the performance of several state-of-the-art LLMs with different ontology-driven prompt templates. The LLM results are also compared against simulated Oracles with variable error rates.
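The selective verification loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the confidence band, the prompt wording, and the `llm` callable are all assumptions, and a real system would call an actual LLM API in place of the stub.

```python
def select_uncertain(mappings, low=0.4, high=0.7):
    """Keep only candidate mappings whose matcher confidence falls in the
    uncertain band [low, high]; the rest bypass LLM verification.
    (The band thresholds are illustrative, not from the paper.)"""
    return [m for m in mappings if low <= m["score"] <= high]

def build_prompt(mapping):
    """Ontology-driven prompt template: include the concept labels plus
    structural context (here, parent classes) so the LLM can judge
    semantic plausibility rather than string similarity alone."""
    return (
        f"Source concept: {mapping['src_label']} "
        f"(subclass of {mapping['src_parent']})\n"
        f"Target concept: {mapping['tgt_label']} "
        f"(subclass of {mapping['tgt_parent']})\n"
        "Do these two concepts refer to the same entity? Answer yes or no."
    )

def verify(mappings, llm, low=0.4, high=0.7):
    """Accept high-confidence mappings directly; defer only the uncertain
    ones to the LLM, mimicking selective expert involvement."""
    accepted = [m for m in mappings if m["score"] > high]
    for m in select_uncertain(mappings, low, high):
        # `llm` is a placeholder for any chat-completion call that
        # returns the model's text answer to the prompt.
        if llm(build_prompt(m)).strip().lower().startswith("yes"):
            accepted.append(m)
    return accepted
```

The design point is the filtering step: confident matches and clear non-matches never reach the LLM (or the human), so verification cost scales with the size of the uncertain band rather than with the full candidate set.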