G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge

πŸ“… 2025-09-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing retrieval-augmented generation (RAG) methods suffer from fragmented information access and weak modeling of graph-structured knowledge in knowledge-intensive tasks, while graph-enhanced RAG approaches often rely on task-specific graph designs, heuristic search, or costly agent-based pipelines, limiting generalizability and scalability. This paper proposes G-reasoner, a unified framework with three components: QuadGraph, a standardized four-layer graph representation that integrates heterogeneous knowledge sources; a lightweight 34M-parameter Graph Foundation Model (GFM) that enables generalizable cross-graph reasoning; and an efficient graph semantic modeling pipeline that leverages mixed-precision training and distributed message passing. Evaluated on six benchmarks, G-reasoner achieves significant improvements over state-of-the-art methods, with higher reasoning accuracy, better computational efficiency, and strong cross-graph generalization.

πŸ“ Abstract
Large language models (LLMs) excel at complex reasoning but remain limited by static and incomplete parametric knowledge. Retrieval-augmented generation (RAG) mitigates this by incorporating external knowledge, yet existing RAG systems struggle with knowledge-intensive tasks due to fragmented information and weak modeling of knowledge structure. Graphs offer a natural way to model relationships within knowledge, but LLMs are inherently unstructured and cannot effectively reason over graph-structured data. Recent graph-enhanced RAG (GraphRAG) attempts to bridge this gap by constructing tailored graphs and enabling LLMs to reason on them. However, these methods often depend on ad-hoc graph designs, heuristic search, or costly agent pipelines, which hinder scalability and generalization. To address these challenges, we present G-reasoner, a unified framework that integrates graph and language foundation models for reasoning over diverse graph-structured knowledge. Central to our approach is QuadGraph, a standardized four-layer abstraction that unifies heterogeneous knowledge sources into a common graph representation. Building on this, we introduce a 34M-parameter graph foundation model (GFM) that jointly captures graph topology and textual semantics, and is integrated with LLMs to enhance reasoning in downstream applications. To ensure scalability and efficiency, we implement mixed-precision training and distributed message passing to scale the GFM across GPUs. Extensive experiments on six benchmarks show that G-reasoner consistently outperforms state-of-the-art baselines, significantly enhances LLM reasoning, and achieves strong efficiency and cross-graph generalization.
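The abstract says QuadGraph is a four-layer abstraction that unifies heterogeneous sources into one graph, but does not name the layers here. As a purely illustrative sketch (the layer names and API below are assumptions, not the authors' design), such an abstraction might be modeled as:

```python
from dataclasses import dataclass, field

# Hypothetical layer names -- the abstract only says "four-layer";
# the actual QuadGraph layers may differ.
LAYERS = ("source", "document", "entity", "attribute")

@dataclass
class Node:
    node_id: str
    layer: str   # one of LAYERS
    text: str    # textual content carried by the node

@dataclass
class QuadGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: list = field(default_factory=list)   # (src_id, dst_id, relation)

    def add_node(self, node_id, layer, text):
        assert layer in LAYERS, f"unknown layer: {layer}"
        self.nodes[node_id] = Node(node_id, layer, text)

    def add_edge(self, src, dst, relation):
        assert src in self.nodes and dst in self.nodes
        self.edges.append((src, dst, relation))

    def neighbors(self, node_id):
        return [dst for src, dst, _ in self.edges if src == node_id]

# Unify two heterogeneous knowledge items into one graph
g = QuadGraph()
g.add_node("kb", "source", "Wikipedia dump")
g.add_node("doc1", "document", "Paris is the capital of France.")
g.add_node("paris", "entity", "Paris")
g.add_edge("kb", "doc1", "contains")
g.add_edge("doc1", "paris", "mentions")
print(g.neighbors("doc1"))  # -> ['paris']
```

The point of a standardized representation like this is that retrieval and the GFM only ever see one node/edge schema, regardless of how many source formats were ingested.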
Problem

Research questions and friction points this paper is trying to address.

Addressing limitations of static knowledge in large language models
Unifying reasoning over diverse graph-structured knowledge sources
Overcoming scalability and generalization issues in graph-enhanced retrieval systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

QuadGraph abstraction unifies heterogeneous knowledge sources
Graph foundation model jointly captures topology and semantics
Mixed-precision training and distributed message passing enable scalable GFM training across GPUs
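The GFM's core operation is message passing over the graph. This is not the paper's model: a single mean-aggregation round in plain Python, for illustration only (mixed precision and distributed execution are deliberately omitted).

```python
def message_pass(features, edges):
    """One synchronous message-passing round: each node averages its
    in-neighbors' feature vectors together with its own.
    features: {node_id: [float, ...]}, edges: list of (src, dst)."""
    incoming = {node: [] for node in features}
    for src, dst in edges:
        incoming[dst].append(features[src])
    updated = {}
    for node, feat in features.items():
        msgs = incoming[node] + [feat]  # self-loop keeps the node's own state
        dim = len(feat)
        updated[node] = [sum(m[i] for m in msgs) / len(msgs)
                         for i in range(dim)]
    return updated

feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.0, 0.0]}
edges = [("a", "c"), ("b", "c")]
out = message_pass(feats, edges)
print(out["c"])  # c averages a, b, and itself -> [0.333..., 0.333...]
```

Stacking several such rounds lets information flow along multi-hop paths, which is how a graph model can capture topology beyond a node's immediate neighbors.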
πŸ”Ž Similar Papers
No similar papers found.