Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning

📅 2024-10-09
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge that large language models (LLMs) cannot natively process text-attributed graphs (TAGs), this paper proposes AskGNN: a framework that uses a graph neural network (GNN)-driven, learnable retrieval mechanism to select the most discriminative labeled nodes as in-context examples, guiding LLMs through graph tasks such as node classification, graph classification, and link prediction. By jointly modeling structural graph priors and supervision signals, AskGNN enables few-shot, training-free adaptation of LLMs to graph tasks via in-context learning (ICL), with no fine-tuning required. Experiments across three fundamental graph learning tasks and seven state-of-the-art LLMs demonstrate that AskGNN consistently outperforms diverse baselines, validating an efficient and general paradigm for endowing LLMs with graph awareness while preserving task-agnostic applicability.

📝 Abstract
Textual Attributed Graphs (TAGs) are crucial for modeling complex real-world systems, yet leveraging large language models (LLMs) for TAGs presents unique challenges due to the gap between sequential text processing and graph-structured data. We introduce AskGNN, a novel approach that bridges this gap by leveraging In-Context Learning (ICL) to integrate graph data and task-specific information into LLMs. AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs, incorporating complex graph structures and their supervision signals. Our learning-to-retrieve algorithm optimizes the retriever to select example nodes that maximize LLM performance on graph tasks. Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance, opening new avenues for applying LLMs to graph-structured data without extensive fine-tuning.
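The retrieve-then-prompt idea from the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: all function and variable names here are hypothetical, and a fixed one-layer graph convolution stands in for the paper's learned, structure-enhanced retriever. Structure-aware node embeddings score the labeled nodes against a query node, and the top-scoring nodes become in-context examples in an LLM prompt.

```python
# Hypothetical sketch of AskGNN-style retrieval for graph ICL.
# A one-layer GCN-like propagation (fixed, not learned) gives each node a
# structure-aware embedding; labeled nodes most similar to the query node
# are retrieved as in-context examples for the LLM prompt.
import numpy as np

def gcn_embed(adj, feats):
    """One propagation step with symmetric normalization: D^-1/2 (A+I) D^-1/2 X."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return norm @ feats

def retrieve_examples(adj, feats, labeled, query, k=2):
    """Return the k labeled node ids whose embeddings best match the query node."""
    emb = gcn_embed(adj, feats)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = {i: float(emb[query] @ emb[i]) for i in labeled if i != query}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(texts, labels, examples, query):
    """Format retrieved labeled nodes as few-shot examples, query node last."""
    lines = [f'Text: "{texts[i]}" -> Label: {labels[i]}' for i in examples]
    lines.append(f'Text: "{texts[query]}" -> Label:')
    return "\n".join(lines)

# Toy text-attributed graph: nodes 0-1 linked (topic A), 2-3 linked (topic B).
adj = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
feats = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
texts = ["GNN survey", "graph learning", "protein folding", "AlphaFold notes"]
labels = {0: "Graphs", 2: "Biology", 3: "Biology"}

top = retrieve_examples(adj, feats, labeled=labels.keys(), query=1, k=1)
print(top)                                   # -> [0]: the structural neighbor
print(build_prompt(texts, labels, top, query=1))
```

In the paper the retriever is optimized (learning-to-retrieve) so that the selected examples maximize LLM task performance; here the scoring is a fixed cosine similarity purely for illustration.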
Problem

Research questions and friction points this paper is trying to address.

Language Models
Text-Attributed Graphs
Non-Text Data Handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

AskGNN
Graph Neural Networks
Text-Attributed Graphs
Authors
Zhengyu Hu, Northwestern University
Yichuan Li, Worcester Polytechnic Institute
Zhengyu Chen, Meituan
Jingang Wang, Meituan (Information Retrieval; Natural Language Processing; Machine Translation)
Han Liu, Northwestern University
Kyumin Lee, Worcester Polytechnic Institute
Kaize Ding, Assistant Professor of Stats & Data Science, Northwestern University (Reliable Machine Learning; Data-Efficient Learning; Anomaly/OOD Detection; LLMs and GNNs)