AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

📅 2026-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing agent frameworks struggle to model the graph-topological dependencies inherent in real-world data, often treating external information as unstructured text. This work proposes Agentic Graph Learning (AGL), a novel paradigm that reframes graph learning as a synergistic process between topology-aware navigation and large language model (LLM) reasoning. We introduce AgentGL, the first reinforcement learning-based framework for AGL, which incorporates graph-native tools to enable multi-scale exploration, employs search constraints to regulate tool invocation, and features a graph-conditioned curriculum reinforcement learning strategy to achieve stable long-horizon training without step-by-step supervision. Evaluated on multiple text-attributed graph benchmarks, AgentGL substantially outperforms GraphLLM and GraphRAG baselines, achieving performance gains of up to 17.5% in node classification and 28.4% in link prediction.
📝 Abstract
Large Language Models (LLMs) increasingly rely on agentic capabilities (iterative retrieval, tool use, and decision-making) to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)-driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 17.5% in node classification and 28.4% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. The code is publicly available at https://github.com/sunyuanfu/AgentGL.
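To make the interleaved navigate-and-infer loop described in the abstract concrete, here is a minimal, purely illustrative sketch: an agent explores a toy text-attributed graph through two graph-native tools (node text and neighborhood lookup) under a hard cap on tool calls, standing in for the paper's search-constrained thinking. All names here (`GRAPH`, `MAX_TOOL_CALLS`, `classify`, the keyword-matching "policy") are hypothetical and are not taken from the AgentGL codebase.

```python
# Toy text-attributed graph: node id -> (text attribute, neighbor list).
GRAPH = {
    0: ("Paper on graph neural networks", [1, 2]),
    1: ("Survey of LLM agents", [0]),
    2: ("Reinforcement learning for reasoning", [0]),
}

MAX_TOOL_CALLS = 4  # search constraint: budget on tool invocations


def get_text(node):
    """Graph-native tool 1: read a node's text attribute."""
    return GRAPH[node][0]


def get_neighbors(node):
    """Graph-native tool 2: expose local topology."""
    return GRAPH[node][1]


def classify(target, label_keywords):
    """Budget-limited breadth-first exploration standing in for the
    learned policy: visit the target and its neighborhood, and return
    the first label whose keyword appears in a visited node's text."""
    frontier, visited, calls = [target], set(), 0
    while frontier and calls < MAX_TOOL_CALLS:
        node = frontier.pop(0)
        if node in visited:
            continue
        visited.add(node)
        calls += 1  # each node expansion spends tool budget
        text = get_text(node).lower()
        for label, keyword in label_keywords.items():
            if keyword in text:
                return label
        frontier.extend(get_neighbors(node))
    return "unknown"  # budget exhausted without a match


print(classify(1, {"RL": "reinforcement", "GNN": "graph neural"}))
```

In the actual framework an LLM policy, trained with curriculum RL, would decide which tool to call next; this sketch only shows the structural idea that classification emerges from bounded, topology-aware exploration rather than from a single text prompt.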
Problem

Research questions and friction points this paper aims to address.

Agentic Graph Learning
Large Language Models
Graph Reasoning
Topological Dependencies
Relational Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Graph Learning
Reinforcement Learning
Large Language Models
Graph Reasoning
Tool-Augmented LLMs
🔎 Similar Papers