🤖 AI Summary
Large language models (LLMs) rely on static knowledge and exhibit opaque reasoning in knowledge-intensive tasks; existing knowledge graph (KG)-enhanced methods suffer from either granularity mismatch (query-guided approaches) or insufficient contextual utilization (clue-guided approaches). Method: We propose Guidance Graph guided Knowledge Exploration (GG Explore), which constructs an intermediate, structured Guidance Graph that jointly integrates query- and clue-guided exploration. Building on this graph, the framework performs structure-aligned filtering and context-aware pruning to achieve dynamic, interpretable knowledge retrieval. Crucially, target knowledge is abstracted into a semantically coherent Guidance Graph, and contextual constraints are modeled explicitly during retrieval. Contribution/Results: Experiments demonstrate significant improvements over state-of-the-art methods on complex multi-hop question answering, with consistent efficiency and robustness even on medium- and small-scale LLMs, demonstrating both methodological soundness and practical applicability.
📝 Abstract
While Large Language Models (LLMs) exhibit strong linguistic capabilities, their reliance on static knowledge and opaque reasoning processes limits their performance in knowledge-intensive tasks. Knowledge graphs (KGs) offer a promising solution, but current exploration methods face a fundamental trade-off: question-guided approaches incur redundant exploration due to granularity mismatches, while clue-guided methods fail to effectively leverage contextual information in complex scenarios. To address these limitations, we propose Guidance Graph guided Knowledge Exploration (GG Explore), a novel framework that introduces an intermediate Guidance Graph to bridge unstructured queries and structured knowledge retrieval. The Guidance Graph defines the retrieval space by abstracting the target knowledge's structure while preserving broader semantic context, enabling precise and efficient exploration. Building upon the Guidance Graph, we develop: (1) Structural Alignment, which filters incompatible candidates without LLM overhead, and (2) Context-Aware Pruning, which enforces semantic consistency with graph constraints. Extensive experiments show that our method achieves superior efficiency and outperforms state-of-the-art approaches, especially on complex tasks, while maintaining strong performance with smaller LLMs, demonstrating practical value.
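To make the two-stage retrieval idea concrete, here is a minimal, hypothetical sketch of how Structural Alignment and Context-Aware Pruning could operate over candidate KG triples. All names (`GUIDANCE_PATTERNS`, `structural_align`, `context_prune`) and the term-overlap proxy for semantic consistency are illustrative assumptions, not the paper's actual implementation, which would use richer graph constraints and learned semantic signals.

```python
# Hypothetical sketch of the two retrieval stages described in the abstract.
# The guidance graph is abstracted here as (subject_type, relation, object_type)
# patterns; the real method preserves broader semantic context as well.
GUIDANCE_PATTERNS = {
    ("Person", "directed", "Film"),
    ("Film", "released_in", "Year"),
}

def structural_align(candidates, patterns):
    """Stage 1 (Structural Alignment): drop candidate triples whose typed
    structure cannot match any guidance-graph pattern -- a cheap filter
    requiring no LLM calls."""
    return [c for c in candidates
            if (c["s_type"], c["rel"], c["o_type"]) in patterns]

def context_prune(candidates, context_terms):
    """Stage 2 (Context-Aware Pruning): keep candidates consistent with the
    query context; term overlap stands in for semantic consistency here."""
    return [c for c in candidates
            if context_terms & {c["s"].lower(), c["o"].lower()}]

candidates = [
    {"s": "Nolan", "s_type": "Person", "rel": "directed",
     "o": "Inception", "o_type": "Film"},
    {"s": "Nolan", "s_type": "Person", "rel": "born_in",
     "o": "London", "o_type": "City"},
    {"s": "Inception", "s_type": "Film", "rel": "released_in",
     "o": "2010", "o_type": "Year"},
]

aligned = structural_align(candidates, GUIDANCE_PATTERNS)  # 2 triples survive
pruned = context_prune(aligned, {"inception", "2010"})
```

The key design point the sketch illustrates is the ordering: the structural filter discards incompatible candidates before any expensive semantic check, which is where the framework's efficiency claim comes from.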