🤖 AI Summary
Existing GNNs struggle with node-level information imbalance and with modeling long-range semantic dependencies. To address these challenges, we propose ReaGAN, a graph neural framework that models each node as an agent endowed with memory and reasoning capabilities. ReaGAN employs a retrieval-augmented generation (RAG) mechanism to retrieve globally relevant semantic knowledge, enabling adaptive, context-aware message passing. Crucially, it integrates a frozen large language model (LLM) to support few-shot local planning and dynamic aggregation without fine-tuning, unifying local structural inductive bias with global semantic understanding. As a result, ReaGAN generalizes better on information-scarce nodes, captures long-range dependencies, and supports in-context learning. Experiments on multiple graph learning benchmarks show that ReaGAN outperforms conventional fixed-aggregation GNNs under few-shot settings.
📝 Abstract
Graph Neural Networks (GNNs) have achieved remarkable success in graph-based learning by propagating information among neighboring nodes via predefined aggregation mechanisms. However, such fixed schemes often suffer from two key limitations. First, they cannot handle the imbalance in node informativeness -- some nodes are rich in information, while others remain sparse. Second, predefined message passing primarily leverages local structural similarity while ignoring global semantic relationships across the graph, limiting the model's ability to capture distant but relevant information. We propose Retrieval-augmented Graph Agentic Network (ReaGAN), an agent-based framework that empowers each node with autonomous, node-level decision-making. Each node acts as an agent that independently plans its next action based on its internal memory, enabling node-level planning and adaptive message propagation. Additionally, retrieval-augmented generation (RAG) allows nodes to access semantically relevant content and build global relationships in the graph. ReaGAN achieves competitive performance under few-shot in-context settings using a frozen LLM backbone without fine-tuning, showcasing the potential of agentic planning and local-global retrieval in graph learning.
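The abstract's core loop (each node agent consults its memory, plans an action, then gathers messages either from structural neighbors or from semantically similar nodes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are my own, and the `plan` step is a rule-based stub standing in for the frozen LLM planner.

```python
import numpy as np

def cosine_topk(query, feats, k, exclude):
    """Return indices of the k nodes most cosine-similar to `query`."""
    sims = feats @ query / (np.linalg.norm(feats, axis=1) * np.linalg.norm(query) + 1e-9)
    sims[exclude] = -np.inf  # never retrieve the node itself
    return np.argsort(sims)[::-1][:k]

class NodeAgent:
    """One node acting as an agent with its own memory (hypothetical sketch)."""
    def __init__(self, nid, feat):
        self.nid = nid
        self.memory = [feat.copy()]  # internal memory of gathered messages

    def plan(self):
        # Stand-in for few-shot LLM planning: simply alternate actions.
        return "local" if len(self.memory) % 2 == 1 else "global"

    def step(self, feats, adj, k=2):
        action = self.plan()
        if action == "local":
            # structural message passing from graph neighbors
            nbrs = adj[self.nid]
            msgs = feats[nbrs] if nbrs else feats[[self.nid]]
        else:
            # RAG-style global retrieval by semantic similarity
            idx = cosine_topk(feats[self.nid], feats, k, exclude=[self.nid])
            msgs = feats[idx]
        self.memory.append(msgs.mean(axis=0))
        return action

# Toy graph: 4 nodes with 2-d features and an adjacency list.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
adj = {0: [1], 1: [0], 2: [3], 3: [2]}
agent = NodeAgent(0, feats[0])
a1 = agent.step(feats, adj)  # first step: local aggregation
a2 = agent.step(feats, adj)  # second step: global retrieval
```

The point of the sketch is the decoupling: aggregation is a per-node decision made at each step, rather than a fixed scheme shared by all nodes, which is what lets information-scarce nodes reach beyond their local neighborhood.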