Ontology-Guided Reverse Thinking Makes Large Language Models Stronger on Knowledge Graph Question Answering

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing forward-entity-matching approaches for Knowledge Graph Question Answering (KGQA) struggle with multi-hop reasoning, as they fail to align abstract question intents with concrete entities—leading to broken reasoning paths, information loss, and redundancy. Method: We propose the Ontology-Guided Reverse-Thinking framework (ORT), introducing a novel “goal → condition” backward-reasoning paradigm that overcomes limitations of conventional forward matching. ORT integrates large language model–based semantic parsing, ontology-structured knowledge graph modeling, and label-level path guidance, operating in three stages to achieve interpretable multi-hop reasoning. Contribution/Results: ORT achieves state-of-the-art performance on WebQSP and ComplexWebQuestions (CWQ), significantly improving both answer accuracy and reasoning interpretability while preserving fidelity to the underlying knowledge graph semantics.

📝 Abstract
Large language models (LLMs) have shown remarkable capabilities in natural language processing. However, in knowledge graph question answering (KGQA) tasks, answering questions that require multi-hop reasoning remains a challenge. Existing methods rely on entity vector matching, but the purpose of a question is abstract and difficult to match with specific entities. As a result, it is difficult to establish reasoning paths to the purpose, which leads to information loss and redundancy. To address this issue, inspired by human reverse thinking, we propose Ontology-Guided Reverse Thinking (ORT), a novel framework that constructs reasoning paths from purposes back to conditions. ORT operates in three key phases: (1) using an LLM to extract purpose labels and condition labels, (2) constructing label reasoning paths based on the KG ontology, and (3) using the label reasoning paths to guide knowledge retrieval. Experiments on the WebQSP and CWQ datasets show that ORT achieves state-of-the-art performance and significantly enhances the capability of LLMs for KGQA.
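The three phases summarized above can be sketched as a toy pipeline. Everything below — the miniature ontology, the label formats, the stand-in for the LLM parser, and all function names — is an illustrative assumption for exposition, not the paper's actual implementation:

```python
# Hedged sketch of ORT's three phases (assumed structure, not the paper's code).
from collections import deque

# Toy ontology: class-level labels linked by relations (not concrete entities).
ONTOLOGY = {
    "Person": {"born_in": "City", "directed": "Film"},
    "Film": {"released_in": "Year"},
    "City": {"located_in": "Country"},
}

def extract_labels(question):
    """Phase 1 (stand-in for LLM semantic parsing): purpose + condition labels."""
    # A real system would prompt an LLM; here one example is hard-coded.
    if "country" in question and "director" in question:
        return {"purpose": "Country", "conditions": ["Person"]}
    raise ValueError("unhandled question")

def build_label_path(purpose, condition):
    """Phase 2: search the ontology backward from the purpose label ("goal -> condition")."""
    # Reverse adjacency: which label reaches `dst` via which relation?
    reverse = {}
    for src, edges in ONTOLOGY.items():
        for rel, dst in edges.items():
            reverse.setdefault(dst, []).append((src, rel))
    queue = deque([(purpose, [])])
    seen = {purpose}
    while queue:
        label, path = queue.popleft()
        if label == condition:
            return path
        for src, rel in reverse.get(label, []):
            if src not in seen:
                seen.add(src)
                queue.append((src, [(src, rel, label)] + path))
    return None

def retrieve(kg, label_path, anchor_entity):
    """Phase 3: follow the label path over concrete KG triples from the anchor."""
    current = anchor_entity
    for _src, rel, _dst in label_path:
        current = kg[(current, rel)]
    return current

# Tiny instance-level KG for the walk-through.
kg = {
    ("Nolan", "born_in"): "London",
    ("London", "located_in"): "UK",
}
labels = extract_labels("what country is the director from")
path = build_label_path(labels["purpose"], labels["conditions"][0])
answer = retrieve(kg, path, "Nolan")
# path: Person --born_in--> City --located_in--> Country; answer: "UK"
```

The backward BFS is the key point: the path is planned over abstract labels first, so entity-level retrieval only has to instantiate an already-grounded route instead of matching the abstract question intent against concrete entities directly.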
Problem

Research questions and friction points this paper is trying to address.

Addresses multi-hop reasoning in KGQA tasks
Reduces information loss and redundancy in answers
Enhances LLMs' capability for knowledge graph questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ontology-Guided Reverse Thinking
Purpose-to-condition reasoning paths
LLM-enhanced knowledge retrieval
Runxuan Liu
Harbin Institute of Technology, Harbin, China
Bei Luo
Beijing University of Posts and Telecommunications, Beijing, China
Jiaqi Li
Joint Laboratory of HIT and iFLYTEK, Beijing, China; University of Science and Technology of China, Hefei, China
Baoxin Wang
iFLYTEK Research
Large Language Models, Grammatical Error Correction, Natural Language Processing
Ming Liu
Harbin Institute of Technology, Harbin, China
Dayong Wu
Joint Laboratory of HIT and iFLYTEK, Beijing, China
Shijin Wang
Tongji University
Scheduling, Maintenance
Bing Qin
Professor at Harbin Institute of Technology
Natural Language Processing, Information Extraction, Sentiment Analysis