🤖 AI Summary
This work addresses multi-hop reasoning over noisy, sparse, or incomplete knowledge graphs, where traditional graph algorithms struggle because they rely on explicit edges and static structure. The authors propose INSES, a framework that couples large language model (LLM)-guided dynamic navigation with embedding-based similarity expansion to reason beyond explicit connections. A lightweight adaptive router complements INSES by delegating simple queries to Naïve RAG and escalating complex ones to graph reasoning, supporting variable reasoning depth while keeping computation low. The method outperforms existing RAG and GraphRAG approaches across multiple benchmarks, improving accuracy by 5%, 10%, and 27% on knowledge graphs built with three different construction methods in the MINE benchmark.
📝 Abstract
GraphRAG is increasingly adopted for converting unstructured corpora into graph structures to enable multi-hop reasoning. However, standard graph algorithms rely heavily on static connectivity and explicit edges, often failing in real-world scenarios where knowledge graphs (KGs) are noisy, sparse, or incomplete. To address this limitation, we introduce INSES (Intelligent Navigation and Similarity Enhanced Search), a dynamic framework designed to reason beyond explicit edges. INSES couples LLM-guided navigation, which prunes noise and steers exploration, with embedding-based similarity expansion to recover hidden links and bridge semantic gaps. Recognizing the computational cost of graph reasoning, we complement INSES with a lightweight router that delegates simple queries to Naïve RAG and escalates complex cases to INSES, balancing efficiency with reasoning depth. INSES consistently outperforms SOTA RAG and GraphRAG baselines across multiple benchmarks. Notably, on the MINE benchmark, it demonstrates superior robustness across KGs constructed by varying methods (KGGEN, GraphRAG, OpenIE), improving accuracy by 5%, 10%, and 27%, respectively.
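The abstract's core idea of "reasoning beyond explicit edges" can be illustrated with a minimal sketch: starting from a frontier of nodes, we expand along explicit graph edges *and* along implicit links recovered via embedding cosine similarity. All names here (`similarity_expand`, the toy embeddings, the threshold) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def similarity_expand(frontier, embeddings, adjacency, threshold=0.9):
    """Hypothetical sketch of embedding-based similarity expansion:
    grow `frontier` with explicit neighbors from `adjacency` plus any
    node whose cosine similarity to a frontier node is >= `threshold`."""
    # Normalize once so cosine similarity reduces to a dot product.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    expanded = set(frontier)
    for node in frontier:
        expanded.update(adjacency.get(node, ()))       # explicit edges
        sims = unit @ unit[node]                       # cosine to all nodes
        implicit = np.flatnonzero(sims >= threshold)   # semantically close nodes
        expanded.update(int(i) for i in implicit if i != node)
    return expanded

# Toy KG: node 0 has an explicit edge to node 2 only, but node 3's
# embedding is nearly identical to node 0's, so the hidden link is recovered.
embeddings = np.array([[1.0, 0.0], [0.95, 0.3], [0.0, 1.0], [1.0, 0.05]])
adjacency = {0: [2]}
print(similarity_expand({0}, embeddings, adjacency, threshold=0.97))
# → {0, 2, 3}: node 2 via the explicit edge, node 3 via similarity
```

In the paper's framing, this expansion step would run inside the LLM-guided navigation loop, with the router deciding per query whether this graph machinery is needed at all or plain Naïve RAG suffices.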