GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation

📅 2025-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional RAG systems struggle to model complex knowledge associations, such as multi-hop reasoning and domain-specific semantics. To address this, we propose the first Graph Foundation Model (GFM) for zero-shot, fine-tuning-free transfer in RAG. With only 8M parameters, the GFM undergoes two-stage pretraining on large-scale data comprising 60 knowledge graphs with over 14 million triples and 700K documents. Its lightweight graph neural network architecture explicitly captures the topological relationships between queries and knowledge. We further provide the first empirical validation that graph neural networks in RAG adhere to neural scaling laws. Evaluated on three multi-hop QA benchmarks and seven domain-specific RAG benchmarks, GFM-RAG achieves state-of-the-art performance across all tasks while demonstrating both efficient inference and strong cross-dataset zero-shot generalization.

📝 Abstract
Retrieval-augmented generation (RAG) has proven effective in integrating knowledge into large language models (LLMs). However, conventional RAG struggles to capture complex relationships between pieces of knowledge, limiting its performance on intricate reasoning that requires integrating knowledge from multiple sources. Recently, graph-enhanced retrieval-augmented generation (GraphRAG) has built graph structures to explicitly model these relationships, enabling more effective and efficient retrievers. Nevertheless, its performance is still hindered by the noise and incompleteness within the graph structure. To address this, we introduce GFM-RAG, a novel graph foundation model (GFM) for retrieval-augmented generation. GFM-RAG is powered by an innovative graph neural network that reasons over the graph structure to capture complex query-knowledge relationships. The GFM with 8M parameters undergoes a two-stage training process on large-scale datasets, comprising 60 knowledge graphs with over 14M triples and 700K documents. This results in impressive performance and generalizability for GFM-RAG, making it the first graph foundation model applicable to unseen datasets for retrieval without any fine-tuning required. Extensive experiments on three multi-hop QA datasets and seven domain-specific RAG datasets demonstrate that GFM-RAG achieves state-of-the-art performance while maintaining efficiency and alignment with neural scaling laws, highlighting its potential for further improvement.
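The abstract describes a GNN that reasons over the knowledge-graph structure to capture query-knowledge relationships. The toy sketch below illustrates the general idea of query-conditioned message passing over a small knowledge graph to score entities for retrieval; the entity/relation names, the multiplicative message function, and the norm-based scoring are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Illustrative sketch (not GFM-RAG's actual model): propagate a
# query-entity indicator through a tiny knowledge graph so that
# entities reachable from the query accumulate non-zero state.
rng = np.random.default_rng(0)
dim = 8

entities = ["Paris", "France", "Europe", "Tokyo", "Japan"]
relations = ["capital_of", "located_in"]
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Tokyo", "capital_of", "Japan"),
]

ent_vec = {e: rng.normal(size=dim) for e in entities}
rel_vec = {r: rng.normal(size=dim) for r in relations}

def retrieve_scores(query_entity, steps=2):
    """Score entities by propagating the query state along triples."""
    # Initialize: only the query entity carries a non-zero state.
    h = {e: (ent_vec[e] if e == query_entity else np.zeros(dim))
         for e in entities}
    for _ in range(steps):
        new_h = {e: h[e].copy() for e in entities}
        for s, r, o in triples:
            # Relation-modulated message from subject to object.
            new_h[o] += h[s] * rel_vec[r]
        h = new_h
    # Entities unreachable from the query within `steps` hops score 0.
    return {e: float(np.linalg.norm(h[e])) for e in entities}

scores = retrieve_scores("Paris")
```

With two propagation steps from "Paris", both the one-hop neighbor "France" and the two-hop neighbor "Europe" receive non-zero scores, while "Tokyo" and "Japan" (disconnected from the query) score zero, mirroring how multi-hop structure can surface relevant knowledge that flat similarity search would miss.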
Problem

Research questions and friction points this paper is trying to address.

Complex Knowledge Relations
Large Language Models Limitations
Multi-step Question Answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

GFM-RAG
Graph Information Integration
Complex Question Handling