CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering

📅 2024-09-29
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 3
Influential: 0
🤖 AI Summary
In complex knowledge graph question answering (KGQA), existing knowledge rewriting methods suffer from imprecise rewriting: they introduce noise, omit critical information, or fail to align with the question's semantics. Method: This paper proposes CoTKR, a Chain-of-Thought (CoT) enhanced rewriting approach that generates reasoning traces and their corresponding knowledge in an interleaved manner, mitigating the limitations of single-step rewriting. It further introduces Preference Alignment from Question Answering Feedback (PAQAF), a training strategy that uses feedback from the downstream QA model to further optimize the knowledge rewriter, bridging the preference gap between the two. Contributions/Results: (1) a CoT-guided interleaved knowledge rewriting paradigm; (2) a framework combining retrieval-augmented LLMs with QA-feedback-based preference alignment for knowledge rewriting. Evaluated with various LLMs across several KGQA benchmarks, CoTKR generates a more beneficial knowledge representation than previous rewriting methods and significantly improves the performance of LLMs in KGQA.

📝 Abstract
Recent studies have explored the use of Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) for Knowledge Graph Question Answering (KGQA). They typically require rewriting retrieved subgraphs into natural language formats comprehensible to LLMs. However, when tackling complex questions, the knowledge rewritten by existing methods may include irrelevant information, omit crucial details, or fail to align with the question’s semantics. To address these issues, we propose a novel rewriting method CoTKR, Chain-of-Thought Enhanced Knowledge Rewriting, for generating reasoning traces and corresponding knowledge in an interleaved manner, thereby mitigating the limitations of single-step knowledge rewriting. Additionally, to bridge the preference gap between the knowledge rewriter and the question answering (QA) model, we propose a training strategy PAQAF, Preference Alignment from Question Answering Feedback, for leveraging feedback from the QA model to further optimize the knowledge rewriter. We conduct experiments using various LLMs across several KGQA benchmarks. Experimental results demonstrate that, compared with previous knowledge rewriting methods, CoTKR generates the most beneficial knowledge representation for QA models, which significantly improves the performance of LLMs in KGQA.
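The interleaved rewriting the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' released code: `llm` is a stand-in for any chat-completion function (prompt string in, completion string out), and the prompt wording is invented for the sketch.

```python
# Hypothetical sketch of CoTKR-style interleaved rewriting (not the authors' code).
# `llm` stands in for any chat-completion call: prompt str -> completion str.

def cotkr_rewrite(question: str, triples: list[str], llm, max_steps: int = 3) -> str:
    """Alternate reasoning steps and knowledge summaries over retrieved triples."""
    context = "Triples:\n" + "\n".join(triples)
    trace: list[str] = []
    for step in range(1, max_steps + 1):
        # Reasoning step: decompose the question into the next sub-question.
        reason = llm(
            f"{context}\nQuestion: {question}\n" + "\n".join(trace)
            + f"\nReason {step}: what sub-question must be answered next?"
        )
        trace.append(f"Reason {step}: {reason}")
        # Knowledge step: verbalize only the triples relevant to that sub-question.
        knowledge = llm(
            f"{context}\nQuestion: {question}\n" + "\n".join(trace)
            + f"\nKnowledge {step}: summarize the triples relevant to Reason {step}."
        )
        trace.append(f"Knowledge {step}: {knowledge}")
    # The interleaved trace is handed to the QA model instead of raw triples.
    return "\n".join(trace)
```

The key design point, per the abstract, is that each knowledge segment is conditioned on an explicit reasoning step, rather than rewriting the whole subgraph in one pass.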
Problem

Research questions and friction points this paper is trying to address.

Enhances knowledge rewriting for complex KGQA tasks
Mitigates irrelevant information and crucial detail omission
Aligns knowledge rewriting with question semantics effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-Thought Enhanced Knowledge Rewriting (CoTKR)
Preference Alignment from QA Feedback (PAQAF)
Interleaved reasoning traces and knowledge generation
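PAQAF's core idea, using QA-model feedback to align the rewriter, can be sketched as preference-pair construction over candidate rewrites. This is a minimal sketch under assumed interfaces (not the paper's implementation): `rewriter`, `qa_model`, and `answer_score` are hypothetical stand-ins for a rewriting LLM, a QA LLM, and an answer-quality metric such as exact match.

```python
# Hypothetical sketch of PAQAF-style preference data construction (not the authors' code).
# `rewriter(question, triples)` -> candidate rewrite (str)
# `qa_model(question, rewrite)` -> predicted answer (str)
# `answer_score(pred, gold)`    -> quality score (float), e.g. exact match

def build_preference_pairs(samples, rewriter, qa_model, answer_score, n_candidates=4):
    """Keep (chosen, rejected) rewrite pairs ranked by downstream QA quality."""
    pairs = []
    for question, triples, gold in samples:
        # Sample several candidate rewrites for the same subgraph.
        candidates = [rewriter(question, triples) for _ in range(n_candidates)]
        # Score each candidate by how well the QA model answers from it.
        scored = [(answer_score(qa_model(question, c), gold), c) for c in candidates]
        scored.sort(key=lambda t: t[0], reverse=True)
        # Only informative pairs (a real quality gap) are kept for alignment.
        if scored[0][0] > scored[-1][0]:
            pairs.append({"prompt": (question, triples),
                          "chosen": scored[0][1],
                          "rejected": scored[-1][1]})
    return pairs
```

The resulting pairs could then feed a standard preference-optimization step (e.g. DPO-style training) so the rewriter's outputs drift toward what the QA model actually answers well from.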
Yike Wu
Southeast University, Nanjing, Jiangsu, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education
Yi Huang
China Mobile Research Institute, Beijing, China
Nan Hu
Southeast University, Nanjing, Jiangsu, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education
Yuncheng Hua
UNSW Sydney
NLP · LLM Agent · Generative AI · KBQA · Dialogue System
Guilin Qi
Southeast University
Artificial Intelligence · Ontology
Jiaoyan Chen
Department of Computer Science, University of Manchester
Knowledge Graph · Ontology · Machine Learning · Large Language Model
Jeff Z. Pan
Professor of Knowledge Computing, University of Edinburgh
Artificial Intelligence · Knowledge Representation and Reasoning · Knowledge Based Learning