🤖 AI Summary
Ontology matching in zero-shot settings faces two challenges: the semantic gap between ontologies and the combinatorial explosion of the search space. This paper proposes a module-level, LLM-based direct alignment method: first, domain-informed heuristic rules prune the candidate module-pair search space; second, an automated prompting mechanism guides large language models to generate high-quality, diverse synthetic alignment corpora; finally, the LLM undergoes lightweight fine-tuning on this corpus. The core innovation lies in the tight coupling of search-space pruning and LLM-driven synthetic data generation, forming a closed-loop fine-tuning paradigm. Experiments across multiple benchmark datasets from the OAEI complex track show that the fine-tuned model significantly outperforms zero-shot baselines, achieving an average 12.7% improvement in Top-1 accuracy and validating both the effectiveness and the generalizability of the approach.
📝 Abstract
Large Language Models (LLMs) are increasingly being integrated into various components of Ontology Matching pipelines. This paper investigates the capability of LLMs to perform ontology matching directly on ontology modules and to generate the corresponding alignments. It further explores how a dedicated fine-tuning strategy can enhance the model's matching performance relative to a zero-shot setting. The proposed method incorporates a search space reduction technique to select relevant subsets from both the source and target ontologies, which are then used to automatically construct prompts. Recognizing the scarcity of reference alignments for training, a novel LLM-based approach is introduced for generating a synthetic dataset. This process creates a corpus of ontology submodule pairs and their corresponding reference alignments, specifically designed to fine-tune an LLM for the ontology matching task. The proposed approach was evaluated on the Conference, Geolink, Enslaved, Taxon, and Hydrography datasets from the OAEI complex track. The results demonstrate that the LLM fine-tuned on the synthetically generated data outperforms the non-fine-tuned base model. The key contribution is a strategy that combines automatic dataset generation with fine-tuning to effectively adapt LLMs for ontology matching tasks.
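To make the first two pipeline stages concrete, here is a minimal sketch of module-pair pruning followed by automatic prompt construction. The paper does not specify its pruning heuristic; the Jaccard label-overlap rule, the function names, and the toy module data below are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of two pipeline stages from the abstract:
# (1) prune the candidate module-pair search space, (2) build a prompt
# per surviving pair. The lexical-overlap heuristic is an assumption.

def lexical_overlap(labels_a, labels_b):
    """Jaccard overlap between two sets of entity labels."""
    a = {l.lower() for l in labels_a}
    b = {l.lower() for l in labels_b}
    return len(a & b) / len(a | b) if (a | b) else 0.0

def prune_module_pairs(source_modules, target_modules, threshold=0.2):
    """Keep only module pairs whose label overlap meets the threshold."""
    return [
        (s_id, t_id)
        for s_id, s_labels in source_modules.items()
        for t_id, t_labels in target_modules.items()
        if lexical_overlap(s_labels, t_labels) >= threshold
    ]

def build_prompt(s_labels, t_labels):
    """Construct one matching prompt for an LLM from a module pair."""
    return (
        "Match the following ontology modules and list correspondences.\n"
        f"Source entities: {sorted(s_labels)}\n"
        f"Target entities: {sorted(t_labels)}"
    )

# Toy source/target ontology modules (illustrative only).
source = {"src.Event": ["Conference", "Workshop", "Talk"],
          "src.Person": ["Author", "Reviewer"]}
target = {"tgt.Meeting": ["Conference", "Symposium", "Talk"],
          "tgt.Paper": ["Abstract", "FullPaper"]}

pairs = prune_module_pairs(source, target)
prompts = [build_prompt(source[s], target[t]) for s, t in pairs]
print(pairs)  # only the Event/Meeting pair survives pruning
```

In the full method, the generated prompts would drive an LLM to produce synthetic alignments, which in turn form the fine-tuning corpus; that closed loop is what the paper evaluates on the OAEI complex track.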