AI Summary
This study addresses the challenge that large language models often fail to reason faithfully and reliably when confronted with conflicts between heterogeneous external knowledge sources, such as textual corpora and knowledge graphs. To systematically investigate this cross-source knowledge conflict problem, the work introduces ConflictQA, a novel benchmark for evaluating model performance in such scenarios, and proposes XoT, a two-stage explainable chain-of-thought framework. XoT integrates retrieval-augmented generation, knowledge graph reasoning, and prompt engineering to fuse heterogeneous knowledge and resolve conflicts. Experimental results demonstrate that mainstream large language models suffer significant performance degradation under cross-source conflicts and are highly sensitive to prompt variations, whereas the XoT framework substantially improves both reasoning accuracy and the reliability of evidence selection.
Abstract
Large language models (LLMs) have achieved remarkable success across a wide range of applications, especially when augmented with external knowledge through retrieval-augmented generation (RAG). Despite their widespread adoption, recent studies have shown that LLMs often struggle to perform faithful reasoning when conflicting knowledge is retrieved. However, existing work primarily focuses on conflicts between external knowledge and the parametric knowledge of LLMs, leaving conflicts across external knowledge sources largely unexplored. Meanwhile, modern RAG systems increasingly emphasize the integration of unstructured text and (semi-)structured data, such as knowledge graphs (KGs), to improve knowledge completeness and reasoning faithfulness. To address this gap, we introduce ConflictQA, a novel benchmark that systematically instantiates conflicts between textual evidence and KG evidence. Extensive evaluations across representative LLMs reveal that, when facing such cross-source conflicts, LLMs often fail to identify reliable evidence for correct reasoning. Instead, they become more sensitive to prompting choices and tend to rely exclusively on either KG or textual evidence, resulting in incorrect responses. Based on these findings, we further propose XoT, a two-stage explanation-based thinking framework tailored for reasoning over heterogeneous conflicting evidence, and verify its effectiveness through extensive experiments.