Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering

📅 2024-06-03
📈 Citations: 6
✨ Influential: 2
🤖 AI Summary
To address hallucination and imprecise reasoning chains in large language models (LLMs) for knowledge graph question answering (KGQA), this paper proposes a collaborative dual-path reasoning framework that synergistically integrates explicit, structured reasoning via a graph neural network (GNN) and implicit, intuitive reasoning via a frozen LLM. Grounded in dual-process cognitive theory, the GNN actively extracts high-quality reasoning paths, while the LLM performs lightweight decision-making only. We introduce an LLM-augmented GNN training mechanism and a knowledge-enhanced multiple-choice prompting paradigm to enable reasoning-chain distillation and controllable guidance. Evaluated on three KGQA benchmarks, our method achieves state-of-the-art performance, significantly improves inference efficiency, and generates interpretable, high-accuracy intermediate reasoning chains, thereby jointly enhancing accuracy, efficiency, and transparency.
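To make the division of labor concrete, here is a minimal illustrative sketch of the "GNN extracts paths, LLM only decides" idea. This is not the paper's code: the relation-score table stands in for GNN-learned edge weights, and `score_path`, `top_k_paths`, and `RELATION_SCORES` are hypothetical names.

```python
# Toy stand-in for the explicit reasoning stage: score candidate KG paths
# with learned per-relation weights (here a fixed dict substituting for a
# trained GNN), then keep the top-k paths as candidate reasoning chains.
RELATION_SCORES = {
    "directed_by": 0.9,
    "starred_in": 0.7,
    "born_in": 0.2,
}

def score_path(path):
    """Score a path (a list of relations) as the product of its edge scores."""
    score = 1.0
    for relation in path:
        score *= RELATION_SCORES.get(relation, 0.1)  # unseen relations get a floor
    return score

def top_k_paths(paths, k=2):
    """Return the k highest-scoring reasoning chains for the LLM to judge."""
    return sorted(paths, key=score_path, reverse=True)[:k]

candidates = [
    ["directed_by"],
    ["starred_in", "born_in"],
    ["born_in"],
]
best = top_k_paths(candidates, k=2)  # → [["directed_by"], ["born_in"]]
```

Only the surviving chains are handed to the frozen LLM, which is what keeps its role lightweight: it never searches the graph itself.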

๐Ÿ“ Abstract
Large Language Models (LLMs) excel at intuitive, implicit reasoning. Guiding LLMs to construct thought chains can enhance their deliberate reasoning abilities, but also faces challenges such as hallucination. Knowledge Graphs (KGs) can provide explicit structured knowledge for LLMs to alleviate these issues. However, existing KG-enhanced methods often overlook explicit graph learning, making it challenging to efficiently provide precise reasoning chains for LLMs. Following dual-process theory, we propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on a Graph Neural Network (GNN) for explicit reasoning on KGs, complementing the implicit reasoning of LLMs through externalized reasoning chains. DualR designs an LLM-empowered GNN module for explicit learning on KGs, efficiently extracting high-quality reasoning chains. These reasoning chains are then refined into a knowledge-enhanced multiple-choice prompt, guiding a frozen LLM to reason thoughtfully for final answer determination. Extensive experiments on three benchmark KGQA datasets demonstrate that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' reasoning with explicit knowledge from KGs.
Integrate a GNN for explicit reasoning to complement LLMs' implicit reasoning.
Improve KGQA performance with efficient, interpretable reasoning chains.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates a GNN for explicit reasoning on KGs
LLM-empowered GNN module extracts high-quality reasoning chains
Refines chains into knowledge-enhanced multiple-choice prompts for a frozen LLM
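The last step above, turning extracted chains into a knowledge-enhanced multiple-choice prompt, can be sketched as follows. The prompt wording and the `format_prompt` helper are assumptions for illustration, not the paper's actual template.

```python
# Hypothetical sketch: serialize reasoning chains as evidence and present
# candidate answers as lettered options, so a frozen LLM only has to pick
# a letter rather than generate a free-form (hallucination-prone) answer.
def format_prompt(question, chains, options):
    # Each chain is a list of entity/relation tokens, rendered as one line.
    evidence = "\n".join(f"- {' -> '.join(chain)}" for chain in chains)
    letters = "ABCDEFG"
    choices = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(options))
    return (
        f"Question: {question}\n"
        f"Supporting reasoning chains:\n{evidence}\n"
        f"Options:\n{choices}\n"
        "Answer with the letter of the best option."
    )

prompt = format_prompt(
    "Who directed Inception?",
    [["Inception", "directed_by", "Christopher Nolan"]],
    ["Christopher Nolan", "Steven Spielberg"],
)
```

Constraining the LLM to a closed option set is what makes the guidance controllable, and the serialized chains double as an interpretable trace of why an answer was chosen.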