OntoURL: A Benchmark for Evaluating Large Language Models on Symbolic Ontological Understanding, Reasoning and Learning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical bottleneck in large language models' (LLMs) ability to process formal symbolic knowledge—particularly ontologies—by systematically evaluating their performance across three dimensions: concept understanding, logical reasoning, and symbolic knowledge acquisition. To this end, the authors introduce OntoURL, the first comprehensive ontology-specific benchmark, grounded in an original three-dimensional taxonomy of ontological capabilities. OntoURL encompasses 40 OWL/RDF ontologies spanning eight domains and 58,981 algorithmically generated, human-verified structured questions. Empirical evaluation of 20 mainstream open-source LLMs reveals that while models exhibit baseline competence in concept understanding, they show substantial deficiencies in logical inference and formal symbolic learning. OntoURL thus provides a reproducible, fine-grained, and formally grounded evaluation framework for bridging symbolic AI and foundation models.

📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across a range of natural language processing tasks, yet their ability to process structured symbolic knowledge remains underexplored. To address this gap, we propose a taxonomy of LLMs' ontological capabilities and introduce OntoURL, the first comprehensive benchmark designed to systematically evaluate LLMs' proficiency in handling ontologies -- formal, symbolic representations of domain knowledge through concepts, relationships, and instances. Based on the proposed taxonomy, OntoURL systematically assesses three dimensions: understanding, reasoning, and learning through 15 distinct tasks comprising 58,981 questions derived from 40 ontologies across 8 domains. Experiments with 20 open-source LLMs reveal significant performance differences across models, tasks, and domains, with current LLMs showing proficiency in understanding ontological knowledge but substantial weaknesses in reasoning and learning tasks. These findings highlight fundamental limitations in LLMs' capability to process symbolic knowledge and establish OntoURL as a critical benchmark for advancing the integration of LLMs with formal knowledge representations.
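The "reasoning" dimension where the abstract reports the largest weaknesses can be illustrated with a toy example (a hedged sketch, not OntoURL's actual task format or data): given subclass axioms extracted from an ontology, answer a transitive subsumption question that an LLM would need to get right. The concept names and axioms below are hypothetical.

```python
# Toy transitive subsumption check over a tiny concept hierarchy.
# Illustrative only; the axioms are invented, not taken from OntoURL.
from collections import deque

# Hypothetical subclass axioms (child -> parent), as might be read from OWL/RDF
SUBCLASS_OF = {
    "Dachshund": "Dog",
    "Dog": "Mammal",
    "Mammal": "Animal",
    "Sparrow": "Bird",
    "Bird": "Animal",
}

def is_subclass(sub: str, sup: str) -> bool:
    """Return True if `sub` is a (possibly transitive) subclass of `sup`."""
    seen = set()
    queue = deque([sub])
    while queue:
        concept = queue.popleft()
        if concept == sup:
            return True
        if concept in seen:
            continue
        seen.add(concept)
        parent = SUBCLASS_OF.get(concept)
        if parent is not None:
            queue.append(parent)
    return False

print(is_subclass("Dachshund", "Animal"))  # True
print(is_subclass("Sparrow", "Mammal"))    # False
```

A symbolic reasoner answers such questions deterministically by traversing the hierarchy; the benchmark's finding is that LLMs, answering the same questions in natural language, often fail to perform this multi-hop inference reliably.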
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on symbolic ontological understanding and reasoning
Assessing LLMs' proficiency in handling formal domain knowledge representations
Identifying limitations in LLMs' symbolic knowledge processing capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces OntoURL benchmark for LLM evaluation
Assesses understanding, reasoning, and learning via 15 tasks over 58,981 questions from 40 ontologies
Reveals substantial LLM weaknesses in ontological reasoning and symbolic learning