From Query to Logic: Ontology-Driven Multi-Hop Reasoning in LLMs

📅 2025-08-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) struggle to model the non-linear, structured reasoning required by multi-hop question answering (MQA). Method: This paper proposes ORACLE, a framework that dynamically constructs question-oriented knowledge ontologies and automatically compiles them into first-order logic (FOL) reasoning chains, thereby integrating the structural expressiveness of knowledge graphs with the semantic understanding capability of LLMs. ORACLE employs an LLM-driven pipeline that performs ontology construction, logical formalization, and sub-problem decomposition in concert. Contribution/Results: The framework significantly improves the logicality and interpretability of reasoning. On several standard MQA benchmarks, ORACLE achieves accuracy competitive with state-of-the-art models such as DeepSeek-R1, while generating reasoning paths with superior consistency and verifiability.

๐Ÿ“ Abstract
Large Language Models (LLMs), despite their success in question answering, exhibit limitations in complex multi-hop question answering (MQA) tasks that necessitate non-linear, structured reasoning. This limitation stems from their inability to adequately capture deep conceptual relationships between entities. To overcome this challenge, we present **ORACLE** (**O**ntology-driven **R**easoning **A**nd **C**hain for **L**ogical **E**lucidation), a training-free framework that combines LLMs' generative capabilities with the structural benefits of knowledge graphs. Our approach operates in three stages: (1) dynamic construction of question-specific knowledge ontologies using LLMs, (2) transformation of these ontologies into First-Order Logic reasoning chains, and (3) systematic decomposition of the original query into logically coherent sub-questions. Experimental results on several standard MQA benchmarks show that our framework achieves highly competitive performance, rivaling current state-of-the-art models such as DeepSeek-R1. Detailed analyses further confirm the effectiveness of each component, while demonstrating that our method generates more logical and interpretable reasoning chains than existing approaches.
Problem

Research questions and friction points this paper is trying to address.

Enhance multi-hop question answering in LLMs
Capture deep conceptual entity relationships
Generate logical and interpretable reasoning chains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic construction of question-specific knowledge ontologies
Transformation of ontologies into First-Order Logic chains
Systematic decomposition into logically coherent sub-questions
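The three stages above can be sketched as a small pipeline. This is a hypothetical illustration only: in ORACLE each stage is driven by LLM prompts, which are stubbed here with hand-written data, and the entity/relation names are invented for the example.

```python
# Hypothetical sketch of ORACLE's three-stage pipeline. The real framework
# uses LLM calls for each stage; here they are stubbed with fixed data.

def build_ontology(question: str) -> dict:
    # Stage 1: construct a question-specific ontology (stub for an LLM
    # extraction call). Keys are (relation, object) edges, values are subjects.
    return {
        ("director_of", "Inception"): "Christopher Nolan",
        ("spouse_of", "Christopher Nolan"): "Emma Thomas",
    }

def to_fol_chain(ontology: dict) -> list[str]:
    # Stage 2: compile each ontology edge into a first-order logic atom.
    return [f"{rel}({obj!r}, {subj!r})" for (rel, obj), subj in ontology.items()]

def decompose(question: str, fol_chain: list[str]) -> list[str]:
    # Stage 3: derive one sub-question per logical hop in the chain.
    return [f"Sub-question {i + 1}: resolve {atom}"
            for i, atom in enumerate(fol_chain)]

question = "Who is the spouse of the director of Inception?"
chain = to_fol_chain(build_ontology(question))
subs = decompose(question, chain)
print(chain)  # two FOL atoms, one per hop
print(subs)   # two sub-questions, answered in order
```

The chain makes each hop explicit and checkable, which is the source of the interpretability gains the paper reports.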
Haonan Bian
School of Cyber Security, Xidian University

Yutao Qi
Associate Professor of Computer Science and Technology, Xidian University
Evolutionary Computation, Machine Learning, Multi-objective Optimization

Rui Yang
School of Cyber Security, Xidian University

Yuanxi Che
School of Cyber Security, Xidian University

Jiaqian Wang
School of Cyber Security, Xidian University

Heming Xia
Natural Language Processing Group, The Hong Kong Polytechnic University
Natural Language Processing, Large Language Models

Ranran Zhen
School of Future Science and Engineering, Soochow University