Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation

📅 2024-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address insufficient retrieval depth and completeness in retrieval-augmented generation (RAG) for complex reasoning tasks, this paper proposes ToG-2: a dual-path iterative retrieval framework that jointly leverages knowledge graphs and documents. ToG-2 introduces a tightly coupled context–graph iterative mechanism that requires no model fine-tuning and supports plug-and-play deployment. By jointly optimizing entity linking, multi-hop document retrieval, and graph querying, it enables knowledge-guided deep contextual understanding. Experiments on seven knowledge-intensive benchmarks show state-of-the-art (SOTA) performance on six datasets: GPT-3.5 augmented with ToG-2 significantly outperforms baseline RAG methods, and LLaMA-2-13B enhanced by ToG-2 reaches reasoning performance comparable to GPT-3.5's direct reasoning, showing that a retrieval-augmented smaller model can approach the direct-reasoning capability of a much larger one.

📝 Abstract
Retrieval-augmented generation (RAG) has improved large language models (LLMs) by using knowledge retrieval to overcome knowledge deficiencies. However, current RAG methods often fall short of ensuring the depth and completeness of retrieved information, which is necessary for complex reasoning tasks. In this work, we introduce Think-on-Graph 2.0 (ToG-2), a hybrid RAG framework that iteratively retrieves information from both unstructured and structured knowledge sources in a tight-coupling manner. Specifically, ToG-2 leverages knowledge graphs (KGs) to link documents via entities, facilitating deep and knowledge-guided context retrieval. Simultaneously, it utilizes documents as entity contexts to achieve precise and efficient graph retrieval. ToG-2 alternates between graph retrieval and context retrieval to search for in-depth clues relevant to the question, enabling LLMs to generate answers. We conduct a series of well-designed experiments to highlight the following advantages of ToG-2: 1) ToG-2 tightly couples the processes of context retrieval and graph retrieval, deepening context retrieval via the KG while enabling reliable graph retrieval based on contexts; 2) it achieves deep and faithful reasoning in LLMs through an iterative knowledge retrieval process of collaboration between contexts and the KG; and 3) ToG-2 is training-free and plug-and-play compatible with various LLMs. Extensive experiments demonstrate that ToG-2 achieves overall state-of-the-art (SOTA) performance on 6 out of 7 knowledge-intensive datasets with GPT-3.5, and can elevate the performance of smaller models (e.g., LLAMA-2-13B) to the level of GPT-3.5's direct reasoning. The source code is available at https://github.com/IDEA-FinAI/ToG-2.
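The alternating loop the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the toy KG, document store, and the `llm_answerable` stand-in (which plays the role of the LLM deciding whether the gathered clues suffice) are all hypothetical.

```python
# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
TOY_KG = {
    "Marie Curie": [("born_in", "Warsaw"), ("field", "Physics")],
    "Warsaw": [("capital_of", "Poland")],
}

# Toy document store: entity -> context passage linked to that entity.
TOY_DOCS = {
    "Marie Curie": "Marie Curie was a physicist born in Warsaw.",
    "Warsaw": "Warsaw is the capital of Poland.",
    "Poland": "Poland is a country in Central Europe.",
}

def graph_retrieval(entities):
    """Graph step: expand the current entities one hop along the KG."""
    triples = []
    for e in entities:
        triples.extend((e, rel, nbr) for rel, nbr in TOY_KG.get(e, []))
    return triples

def context_retrieval(entities):
    """Context step: fetch document passages tied to the entities."""
    return [TOY_DOCS[e] for e in entities if e in TOY_DOCS]

def llm_answerable(question, clues):
    """Stand-in for the LLM's 'are these clues sufficient?' check.
    Here it just looks for a keyword; in ToG-2 this is an LLM call."""
    return any("born in Warsaw" in c for c in clues)

def tog2_loop(question, topic_entities, max_iters=3):
    """Alternate graph retrieval and context retrieval until the
    clues suffice or the iteration budget is exhausted."""
    entities, clues = list(topic_entities), []
    for _ in range(max_iters):
        clues.extend(context_retrieval(entities))   # context step
        if llm_answerable(question, clues):         # LLM sufficiency check
            break
        triples = graph_retrieval(entities)         # graph step
        entities = [nbr for _, _, nbr in triples] or entities
    return clues

clues = tog2_loop("Where was Marie Curie born?", ["Marie Curie"])
```

The point of the sketch is the control flow: each entity set guides which documents are read, and each reading round (via the LLM check) decides whether to expand the graph another hop, which is the tight coupling the paper emphasizes.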
Problem

Research questions and friction points this paper is trying to address.

Enhances LLM reasoning depth via RAG
Combines structured and unstructured knowledge retrieval
Improves knowledge-intensive task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid RAG framework integrates knowledge graphs
Iterative retrieval from structured and unstructured sources
Training-free, plug-and-play with various LLMs
🔎 Similar Papers
No similar papers found.