When to Trust: A Causality-Aware Calibration Framework for Accurate Knowledge Graph Retrieval-Augmented Generation

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the overconfidence of existing knowledge graph retrieval-augmented generation (KG-RAG) models when confronted with incomplete or unreliable subgraphs, a critical limitation in high-stakes applications. To mitigate this issue, the authors propose Ca2KG, a novel framework that introduces a causal reasoning perspective into KG-RAG. Ca2KG employs counterfactual prompting to expose the uncertainty inherent in retrieved evidence, and a panel-based re-scoring mechanism that stabilises predictions across interventions, jointly calibrating both knowledge quality and reasoning confidence. Experimental results on two complex question-answering benchmarks demonstrate that Ca2KG substantially improves model calibration while maintaining or even enhancing predictive accuracy.

📝 Abstract
Knowledge Graph Retrieval-Augmented Generation (KG-RAG) extends the RAG paradigm by incorporating structured knowledge from knowledge graphs, enabling Large Language Models (LLMs) to perform more precise and explainable reasoning. While KG-RAG improves factual accuracy in complex tasks, existing KG-RAG models are often severely overconfident, producing high-confidence predictions even when retrieved sub-graphs are incomplete or unreliable, which raises concerns for deployment in high-stakes domains. To address this issue, we propose Ca2KG, a Causality-aware Calibration framework for KG-RAG. Ca2KG integrates counterfactual prompting, which exposes retrieval-dependent uncertainties in knowledge quality and reasoning reliability, with a panel-based re-scoring mechanism that stabilises predictions across interventions. Extensive experiments on two complex QA datasets demonstrate that Ca2KG consistently improves calibration while maintaining or even enhancing predictive accuracy.
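The abstract describes a two-part mechanism: counterfactual prompting elicits the model's answer confidence under interventions on the retrieved evidence, and a panel-based re-scoring step aggregates those confidences so that predictions which are unstable under intervention are down-weighted. The paper's exact scoring rule is not given here, so the sketch below is only an illustrative stand-in: it assumes each panel member yields a scalar confidence in [0, 1] and shrinks the mean confidence by its spread across interventions.

```python
import statistics

def panel_rescored_confidence(panel_confidences):
    """Aggregate confidences collected under a panel of counterfactual
    interventions (e.g., the same question answered with the retrieved
    subgraph intact, perturbed, or withheld).

    Illustrative rule (an assumption, not the paper's formula): take the
    mean panel confidence and shrink it by the population standard
    deviation, so answers whose confidence collapses when the evidence
    is perturbed receive a much lower calibrated score.
    """
    mean = statistics.fmean(panel_confidences)
    spread = statistics.pstdev(panel_confidences)
    return max(0.0, mean - spread)

# An answer that stays confident under interventions is barely discounted,
# while an overconfident answer that collapses is heavily penalised.
stable = panel_rescored_confidence([0.92, 0.90, 0.89])
fragile = panel_rescored_confidence([0.95, 0.40, 0.30])
```

Under this toy rule, `stable` remains close to the raw mean while `fragile` drops well below it, which captures the qualitative behaviour the abstract attributes to the re-scoring mechanism: retrieval-dependent uncertainty is surfaced by the interventions and folded into the final confidence.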
Problem

Research questions and friction points this paper is trying to address.

Knowledge Graph Retrieval-Augmented Generation
overconfidence
calibration
reliability
factual accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causality-aware Calibration
Counterfactual Prompting
Knowledge Graph Retrieval-Augmented Generation
Overconfidence Mitigation
Panel-based Re-scoring
Jing Ren
RMIT University, Melbourne, Australia
Bowen Li
PhD Candidate at RMIT
Causal artificial intelligence
Ziqi Xu
Lecturer, School of Computing Technologies, RMIT University
Causal AI, Fairness
Xikun Zhang
RMIT University, Melbourne, Australia
Haytham Fayek
RMIT University, Melbourne, Australia
Xiaodong Li
RMIT University, Melbourne, Australia