Complex Ontology Matching with Large Language Model Embeddings

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Semantic matching between ontologies and knowledge graphs suffers from limited expressivity. To address this, the paper proposes an expressive semantic alignment method that integrates large language model (LLM) embeddings throughout the matching pipeline. The approach restructures the architecture at three levels: label similarity, sub-graph neighborhood matching, and entity-level alignment, while incorporating ABox-driven relation discovery to enhance semantic expressivity. Concretely, it combines LLM embeddings (e.g., from ChatGLM and Llama3) with sub-graph neighborhood encoding and compares embeddings at word, sentence, and LLM granularities. Evaluated on standard benchmarks, the LLM-based variant improves the baseline version of the approach by 45% in F-measure, substantially outperforming conventional word- and sentence-embedding approaches and supporting the role of LLMs in high-expressivity semantic alignment.
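The label-similarity level of the pipeline can be pictured as scoring candidate label pairs by the cosine similarity of their embeddings and keeping pairs above a threshold. The sketch below illustrates that idea only; the character-trigram `embed` function, the `match_labels` helper, and the threshold value are stand-in assumptions (the paper uses LLM embeddings such as ChatGLM or Llama3, not trigrams):

```python
import math
from collections import Counter

def embed(label, n=3):
    # Stand-in embedding: a bag of character trigrams. In the paper's
    # pipeline this would instead be an LLM embedding of the label.
    s = f"  {label.lower()}  "
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_labels(source_labels, target_labels, threshold=0.5):
    # Keep candidate correspondences whose similarity clears the threshold.
    pairs = []
    for s in source_labels:
        for t in target_labels:
            sim = cosine(embed(s), embed(t))
            if sim >= threshold:
                pairs.append((s, t, round(sim, 3)))
    return pairs
```

Swapping `embed` for an LLM encoder changes only the vector source; the thresholded cosine-matching step stays the same, which is why the paper can compare word, sentence, and LLM embeddings within one architecture.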

📝 Abstract
Ontology matching, and more broadly knowledge graph matching, is a challenging task in which expressiveness has not been fully addressed. Despite the increasing use of embeddings and language models for this task, approaches for generating expressive correspondences still do not take full advantage of these models, in particular large language models (LLMs). This paper proposes to integrate LLMs into an approach for generating expressive correspondences based on alignment need and ABox-based relation discovery. The generation of correspondences is performed by matching similar surroundings of instance sub-graphs. The integration of LLMs results in different architectural modifications, including label similarity, sub-graph matching, and entity matching. The performance of word embeddings, sentence embeddings, and LLM-based embeddings was compared. The results demonstrate that integrating LLMs surpasses all other models, enhancing the baseline version of the approach with a 45% increase in F-measure.
Problem

Research questions and friction points this paper is trying to address.

Expressiveness in ontology and knowledge graph matching is not fully addressed
Existing embedding-based approaches do not take full advantage of LLMs
Generating expressive correspondences efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates large language model (LLM) embeddings into the matching pipeline
Generates correspondences by matching similar surroundings of instance sub-graphs
Improves F-measure over the baseline version by 45%
Guilherme Sousa
IRIT & Université de Toulouse 2 Jean Jaurès, Toulouse, France
Rinaldo Lima
Federal Rural University of Pernambuco
Artificial Intelligence · Text Mining · Information Extraction · Semantic Web
C. Trojahn
IRIT & Université de Toulouse 2 Jean Jaurès, Toulouse, France