Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models

📅 2024-09-29
🏛️ Neural Information Processing Systems
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM-based graph analysis benchmarks rely on direct structural reasoning over textual prompts, limiting scalability to large graphs; in contrast, human experts routinely solve such tasks programmatically using graph libraries (e.g., NetworkX, PyTorch Geometric). Method: We propose ProGraph, the first programming-centric benchmark for graph analysis, comprising three expert-level task categories, multi-scale real-world graphs, and six mainstream graph libraries. On ProGraph, current LLMs perform poorly, with the best model reaching only 36% accuracy. We further introduce LLM4Graph, a high-quality dataset featuring authoritative documentation and automatically generated code. To enhance API comprehension and code generation, we augment closed-source LLMs with documentation retrieval and fine-tune open-source LLMs on this data. Contribution/Results: Our approach yields 11-32% absolute accuracy gains on ProGraph. All components, including the ProGraph benchmark, LLM4Graph dataset, and enhanced models, are publicly released to advance LLMs' programmatic understanding of structured graph data.

📝 Abstract
The need to analyze graphs is ubiquitous across various fields, from social networks to biological research and recommendation systems. Therefore, enabling the ability of large language models (LLMs) to process graphs is an important step toward more advanced general intelligence. However, current LLM benchmarks on graph analysis require models to directly reason over the prompts describing graph topology, and are thus limited to small graphs with only a few dozen nodes. In contrast, human experts typically write programs based on popular libraries for task solving, and can thus handle graphs of different scales. To this end, a question naturally arises: can LLMs analyze graphs like professionals? In this paper, we introduce ProGraph, a manually crafted benchmark containing 3 categories of graph tasks. The benchmark expects solutions based on programming instead of directly reasoning over raw inputs. Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy. To bridge this gap, we propose the LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries. By augmenting closed-source LLMs with document retrieval and fine-tuning open-source ones on the code, we show 11-32% absolute improvements in their accuracies. Our results underscore that the capabilities of LLMs in handling structured data are still under-explored, and show the effectiveness of LLM4Graph in enhancing LLMs' proficiency in graph analysis. The benchmark, datasets and enhanced open-source models are available at https://github.com/BUPT-GAMMA/ProGraph.
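For illustration, here is a minimal sketch of the kind of library-based solution ProGraph expects, using NetworkX. The task and edge list are hypothetical (not drawn from the paper or its datasets):

```python
import networkx as nx

# Hypothetical ProGraph-style task: given a graph, report the number of
# connected components and the node with the highest PageRank score.
edges = [(0, 1), (0, 2), (0, 3), (3, 4), (5, 6)]
G = nx.Graph(edges)

num_components = nx.number_connected_components(G)  # 2
pagerank = nx.pagerank(G)
top_node = max(pagerank, key=pagerank.get)  # node 0, the highest-degree hub

print(num_components, top_node)
```

The point of the programmatic setting is scalability: the same few lines run unchanged on graphs with millions of nodes, whereas reasoning directly over a textual description of the topology does not.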
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' graph analysis capabilities
Enhance LLMs with programming-based solutions
Propose datasets for improved graph task accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmenting LLMs with document retrieval
Fine-tuning LLMs on auto-generated codes
Introducing ProGraph benchmark for graph tasks
Xin Li
Beijing University of Posts and Telecommunications
Weize Chen
Tsinghua University
NLP, ML
Qizhi Chu
Beijing University of Posts and Telecommunications
Haopeng Li
PhD in Electrical Engineering, KTH Royal Institute of Technology
Mobile Visual Computing and Communication, Video Coding, Mobile Video, Visual Search
Zhaojun Sun
Beijing University of Posts and Telecommunications
Ran Li
Tsinghua University
Cheng Qian
Tsinghua University
Yiwei Wei
China University of Petroleum at Karamay
Zhiyuan Liu
Tsinghua University
Chuan Shi
Beijing University of Posts and Telecommunications
data mining, machine learning, social network analysis
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing
Cheng Yang
Beijing University of Posts and Telecommunications