🤖 AI Summary
Existing Graph Neural Architecture Search (GNAS) methods generalize poorly and demand substantial domain expertise, requiring extensive manual adaptation and code refactoring whenever they are transferred to a new graph search space.
Method: This paper pioneers the systematic integration of Large Language Models (LLMs) into the entire GNAS pipeline—graph feature engineering, architecture search, and hyperparameter optimization—replacing traditional code modification with prompt engineering to enable zero-shot transfer across diverse graph structures (homogeneous and heterogeneous). The approach combines LLMs, Graph Neural Networks (GNNs), and AutoML techniques without requiring model code rewriting or low-level framework adjustments.
Contribution/Results: Experimental results on multiple benchmark tasks demonstrate significant performance gains over state-of-the-art GNAS methods. The proposed framework achieves superior scalability, robustness, and reduced dependency on domain expertise, establishing a novel paradigm for automated graph learning.
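To make the prompt-engineering idea concrete, the following is a minimal, hypothetical sketch of how an LLM-driven architecture search step might be structured: the search space is declared as data, rendered into a prompt, and the LLM's reply is parsed and validated. All function names, the search-space dictionary, and the stubbed reply are illustrative assumptions, not the actual LLM4GNAS API; a real system would call an LLM service where the stub appears.

```python
import json

def build_search_prompt(search_space: dict, task: str) -> str:
    """Render a declarative search space into an LLM prompt (hypothetical format)."""
    lines = [f"Task: {task}", "Choose one option per component and reply as JSON."]
    for component, options in search_space.items():
        lines.append(f"- {component}: {options}")
    return "\n".join(lines)

def parse_architecture(llm_reply: str, search_space: dict) -> dict:
    """Parse the LLM's JSON reply and validate it against the declared search space."""
    arch = json.loads(llm_reply)
    for component, choice in arch.items():
        if choice not in search_space.get(component, []):
            raise ValueError(f"invalid choice {choice!r} for {component!r}")
    return arch

# Hypothetical homogeneous-graph search space; switching to a heterogeneous
# graph would only change this dict and the prompt text, not any model code.
space = {
    "conv": ["GCNConv", "GATConv", "SAGEConv"],
    "aggregation": ["mean", "max", "sum"],
    "layers": [2, 3, 4],
}
prompt = build_search_prompt(space, "node classification on Cora")
# A stubbed reply stands in for a real LLM API call.
reply = '{"conv": "GATConv", "aggregation": "mean", "layers": 3}'
arch = parse_architecture(reply, space)
```

The point of the sketch is the claimed workflow shift: adapting to a new search space means editing the `space` dictionary and prompt wording rather than rewriting model or framework code.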
📝 Abstract
Graph Neural Architecture Search (GNAS) facilitates the automatic design of Graph Neural Networks (GNNs) tailored to specific downstream graph learning tasks. However, existing GNAS approaches often require manual adaptation to new graph search spaces, necessitating substantial code optimization and domain-specific knowledge. To address this challenge, we present LLM4GNAS, a toolkit for GNAS that leverages the generative capabilities of Large Language Models (LLMs). LLM4GNAS includes an algorithm library for graph neural architecture search algorithms based on LLMs, enabling the adaptation of GNAS methods to new search spaces through the modification of LLM prompts. This approach reduces the need for manual intervention in algorithm adaptation and code modification. The LLM4GNAS toolkit is extensible and robust, incorporating LLM-enhanced graph feature engineering, LLM-enhanced graph neural architecture search, and LLM-enhanced hyperparameter optimization. Experimental results indicate that LLM4GNAS outperforms existing GNAS methods on tasks involving both homogeneous and heterogeneous graphs.