Learning to Retrieve and Reason on Knowledge Graph through Active Self-Reflection

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge graph (KG) reasoning suffers from shallow coupling between retrieval and inference, as well as the absence of fine-grained feedback to support path-level reflection and correction. To address these challenges, we propose ARG, an end-to-end trainable active self-reflection framework. Our method integrates reinforcement learning with supervised learning to jointly optimize three core components: (1) an active knowledge retrieval decision mechanism that dynamically triggers retrieval in a token-controllable manner; (2) a reflective critique module that performs structured assessment and correction of retrieved subgraphs; and (3) an iterative graph traversal inference mechanism that generates highly interpretable and controllable reasoning paths. Empirically, ARG achieves significant improvements over state-of-the-art methods across multiple KG reasoning benchmarks. Crucially, its generated reasoning paths exhibit both strong interpretability, enabling transparent step-by-step justification, and explicit controllability, allowing precise intervention at the retrieval or inference stage.
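The three components above form a retrieve-reflect-reason loop. A minimal sketch of that loop on a toy knowledge graph is below; the control-token names (`[RET]`, `[NO_RET]`, `[SUP]`), the heuristic decision and critique functions, and the toy graph are all illustrative assumptions, not the paper's actual token vocabulary or learned policies.

```python
# Hypothetical sketch of ARG's retrieve-reflect-reason loop on a toy KG.
# Token names and scoring heuristics are assumptions for illustration only.

TOY_KG = {
    ("Paris", "capital_of"): "France",
    ("France", "continent"): "Europe",
}

def decide_retrieval(entity, relation, known):
    """Active decision: emit a control token rather than always retrieving."""
    return "[RET]" if (entity, relation) not in known else "[NO_RET]"

def critique(triple):
    """Reflective critique: judge whether a retrieved triple supports the step."""
    head, rel, tail = triple
    return "[SUP]" if tail is not None else "[REJECT]"

def reason(start, relations):
    """Iterative traversal: follow a chain of relations, reflecting on each
    retrieved hop and recording an interpretable (head, rel, tail, token) path."""
    path, entity, known = [], start, {}
    for rel in relations:
        token = decide_retrieval(entity, rel, known)
        if token == "[RET]":
            known[(entity, rel)] = TOY_KG.get((entity, rel))  # retrieval step
        tail = known[(entity, rel)]
        if critique((entity, rel, tail)) != "[SUP]":
            break  # reject this hop and terminate the path
        path.append((entity, rel, tail, token))
        entity = tail
    return entity, path
```

Calling `reason("Paris", ["capital_of", "continent"])` returns the answer entity together with the full hop-by-hop path, which is what makes the reasoning trace both inspectable and easy to intervene on.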

📝 Abstract
Extensive research has investigated the integration of large language models (LLMs) with knowledge graphs to enhance the reasoning process. However, understanding how models perform reasoning utilizing structured graph knowledge remains underexplored. Most existing approaches rely on LLMs or retrievers to make binary judgments regarding the utilization of knowledge, which is too coarse. Meanwhile, there is still a lack of feedback mechanisms for reflection and correction throughout the entire reasoning path. This paper proposes an Active self-Reflection framework for knowledge Graph reasoning (ARG), introducing for the first time an end-to-end training approach to achieve iterative reasoning grounded on structured graphs. Within the framework, the model leverages special tokens to *actively* determine whether knowledge retrieval is necessary, performs *reflective* critique based on the retrieved knowledge, and iteratively reasons over the knowledge graph. The reasoning paths generated by the model exhibit high interpretability, enabling deeper exploration of the model's understanding of structured knowledge. Ultimately, the proposed model achieves outstanding results compared to existing baselines in knowledge graph reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhance reasoning with knowledge graphs
Active self-reflection in model reasoning
End-to-end iterative reasoning training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active self-Reflection framework
Iterative reasoning on graphs
End-to-end training approach
Han Zhang
Center for Applied Statistics, Renmin University of China; School of Statistics, Renmin University of China
Langshi Zhou
School of Statistics, Renmin University of China
Hanfang Yang
Assistant Professor of Statistics, School of Statistics, Renmin University of China