Exploring the Potential of Large Language Models as Predictors in Dynamic Text-Attributed Graphs

πŸ“… 2025-03-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) face two key challenges in dynamic text-attributed graph forecasting: (1) limited historical context length, and (2) temporal drift of domain-specific features. Method: We propose GraphAgent-Dynamic (GAD), a multi-agent framework featuring global/local temporal summarization agents to model structural evolution and a knowledge-reflection agent for domain-adaptive, incremental knowledge updating. Contribution/Results: GAD is the first work to systematically apply LLMs to zero-shot prediction on dynamic graphs, overcoming constraints of static graph modeling and long-context dependency. It requires no dataset-specific trainingβ€”yet achieves or surpasses fully supervised GNN performance across multiple dynamic graph benchmarks. Moreover, GAD significantly enhances cross-domain transferability and few-shot generalization, demonstrating strong adaptability to evolving graph structures and textual attributes without parameter updates.

πŸ“ Abstract
With the rise of large language models (LLMs), there has been growing interest in Graph Foundation Models (GFMs) for graph-based tasks. By leveraging LLMs as predictors, GFMs have demonstrated impressive generalizability across various tasks and datasets. However, existing research on LLMs as predictors has predominantly focused on static graphs, leaving their potential in dynamic graph prediction unexplored. In this work, we pioneer the use of LLMs for predictive tasks on dynamic graphs. We identify two key challenges: the constraints imposed by context length when processing large-scale historical data, and the significant variability in domain characteristics, both of which complicate the development of a unified predictor. To address these challenges, we propose the GraphAgent-Dynamic (GAD) framework, a multi-agent system built on collaborative LLMs. In contrast to using a single LLM as the predictor, GAD incorporates global and local summary agents to generate domain-specific knowledge, enhancing its transferability across domains. Additionally, knowledge-reflection agents enable adaptive updates to GAD's knowledge, maintaining a unified and self-consistent architecture. In experiments, GAD achieves performance comparable to, or even exceeding, that of fully supervised graph neural networks without dataset-specific training. Finally, to enhance the task-specific performance of LLM-based predictors, we discuss potential improvements, such as dataset-specific fine-tuning of LLMs. By developing tailored strategies for different tasks, we provide new insights for the future design of LLM-based predictors.
Problem

Research questions and friction points this paper is trying to address.

Exploring LLMs for dynamic graph prediction tasks.
Addressing context length and domain variability challenges.
Proposing GAD Framework for cross-domain transferability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GraphAgent-Dynamic Framework for dynamic graphs
Multi-agent system with collaborative LLMs
Global and local summary agents enhance transferability
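The agent roles above can be pictured as a simple loop: summary agents compress graph history to fit the LLM's context window, and a reflection agent revises a running knowledge store when predictions miss. The sketch below is purely illustrative; all names (`llm`, `GAD`, the prompt formats) are assumptions, not the paper's actual implementation, and `llm` is a stub standing in for a real model call.

```python
# Minimal sketch of a GAD-style multi-agent loop (hypothetical; the real
# framework's prompts, agent interfaces, and knowledge format are not shown
# in this summary).

def llm(prompt: str) -> str:
    """Stand-in for an LLM API call; returns a canned string for the sketch."""
    return f"summary<{prompt[:40]}>"

class GAD:
    def __init__(self):
        # Domain knowledge maintained in text form by the reflection agent;
        # the LLM itself is never fine-tuned (no parameter updates).
        self.knowledge: list[str] = []

    def global_summary(self, history: list[str]) -> str:
        # Global summary agent: compress the full event history so
        # long-range structural evolution fits in the context window.
        return llm("global: " + "; ".join(history))

    def local_summary(self, recent: list[str]) -> str:
        # Local summary agent: summarize only the most recent
        # neighborhood events around the query node.
        return llm("local: " + "; ".join(recent))

    def reflect(self, prediction: str, outcome: str) -> None:
        # Knowledge-reflection agent: when the observed outcome diverges
        # from the prediction, append a textual correction for future use.
        if prediction != outcome:
            self.knowledge.append(f"revise: expected {prediction}, saw {outcome}")

    def predict(self, history: list[str], recent: list[str]) -> str:
        # Predictor LLM consumes both summaries plus accumulated knowledge.
        ctx = [self.global_summary(history), self.local_summary(recent)]
        return llm("predict with " + " | ".join(ctx + self.knowledge))
```

The design point this illustrates: context-length limits are handled by summarization rather than truncation, and temporal drift is handled by editing a textual knowledge store rather than retraining, which is why the framework can transfer across domains zero-shot.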
πŸ”Ž Similar Papers
No similar papers found.