CoG: Controllable Graph Reasoning via Relational Blueprints and Failure-Aware Refinement over Knowledge Graphs

📅 2026-01-16
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of large language models to noise and structural misalignment in knowledge graph reasoning, which often leads to rigid and error-prone inference. To mitigate this, the authors propose CoG, a training-free framework inspired by dual-process theory in cognitive science, combining fast intuitive reasoning with slow analytical reasoning. CoG employs relational blueprints as interpretable soft constraints that guide the search, and it activates an evidence-driven backtracking mechanism upon detecting failure, enabling iterative refinement. This design allows controllable optimization of the reasoning trajectory and achieves state-of-the-art performance on three standard benchmarks, with significant gains in both accuracy and efficiency over existing methods.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities but often grapple with reliability challenges like hallucinations. While Knowledge Graphs (KGs) offer explicit grounding, existing paradigms of KG-augmented LLMs typically exhibit cognitive rigidity: they apply homogeneous search strategies that render them vulnerable to instability under neighborhood noise and to structural misalignment, leading to reasoning stagnation. To address these challenges, we propose CoG, a training-free framework inspired by Dual-Process Theory that mimics the interplay between intuition and deliberation. First, functioning as the fast, intuitive process, the Relational Blueprint Guidance module leverages relational blueprints as interpretable soft structural constraints to rapidly stabilize the search direction against noise. Second, functioning as the prudent, analytical process, the Failure-Aware Refinement module intervenes upon encountering reasoning impasses: it triggers evidence-conditioned reflection and executes controlled backtracking to overcome reasoning stagnation. Experimental results on three benchmarks demonstrate that CoG significantly outperforms state-of-the-art approaches in both accuracy and efficiency.
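The abstract describes a control loop: a fast process scores KG hops against a relational blueprint, and a slow process backtracks on an impasse. The toy Python sketch below illustrates that loop under stated assumptions: the tiny KG, the relation-chain blueprint format, the scoring heuristic, and the constraint-relaxation stand-in for LLM-based reflection are all illustrative, not the authors' implementation.

```python
# Toy sketch of CoG's dual-process loop. In the actual framework an LLM
# performs reflection and answer judging; simple heuristics stand in here.

# Tiny illustrative KG: {head: [(relation, tail), ...]}
KG = {
    "Einstein": [("born_in", "Ulm"), ("field", "Physics")],
    "Ulm": [("located_in", "Germany")],
    "Germany": [("capital", "Berlin")],
}

def blueprint_score(relation, blueprint):
    """Fast process: the blueprint is an ordered relation chain acting as a
    soft constraint; a match at the current hop scores highest, but other
    relations keep a small nonzero score (soft, not hard, filtering)."""
    return 1.0 if blueprint and relation == blueprint[0] else 0.1

def cog_reason(entity, blueprint, max_steps=6):
    """Follow blueprint-guided hops; on an impasse (no outgoing edges or no
    well-scored candidate), backtrack one hop and, if no checkpoint remains,
    relax the blueprint (a stand-in for evidence-conditioned reflection)."""
    checkpoints = []                      # stack of (entity, remaining chain)
    path = []                             # evidence gathered so far
    node, remaining = entity, list(blueprint)
    for _ in range(max_steps):
        if not remaining:
            return node, path             # blueprint satisfied
        candidates = KG.get(node, [])
        scored = sorted(
            ((blueprint_score(r, remaining), r, t) for r, t in candidates),
            reverse=True,
        )
        if not scored or scored[0][0] < 1.0:
            # Slow process: impasse detected -> controlled backtracking.
            if not checkpoints:
                remaining = remaining[1:]  # relax the soft constraint
                continue
            node, remaining = checkpoints.pop()
            path.pop()
            continue
        checkpoints.append((node, remaining))
        _, rel, node = scored[0]
        remaining = remaining[1:]
        path.append((rel, node))
    return node, path

# "Where was Einstein born (which country)?" with blueprint
# ("born_in", "located_in"):
print(cog_reason("Einstein", ("born_in", "located_in")))
# -> ('Germany', [('born_in', 'Ulm'), ('located_in', 'Germany')])
```

The blueprint never hard-prunes candidates (every relation keeps a small score), which mirrors the paper's framing of blueprints as *soft* structural constraints rather than rigid path templates.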
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Knowledge Graphs
Reasoning Stagnation
Hallucinations
Cognitive Rigidity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controllable Graph Reasoning
Relational Blueprints
Failure-Aware Refinement
Knowledge Graphs
Dual-Process Theory
Yuanxiang Liu
Zhejiang University
Songze Li
Zhejiang University
Xiaoke Guo
Zhejiang University
Zhaoyan Gong
Zhejiang University
Qifei Zhang
PhD in Computer Science, Zhejiang University
Cloud Computing, Networking, Information Security, Operating System
Hua-zeng Chen
Zhejiang University
Wen Zhang
Zhejiang University
Knowledge Graph, Representation Learning