AI Summary
Existing graph neural architecture search (GNAS) methods rely heavily on manually designed search spaces and domain-specific optimization strategies, imposing high barriers to entry. To address this, we propose the first GPT-4-driven semantic GNAS framework. Our approach abandons hand-crafted discrete search spaces and explicit optimization policies; instead, it employs customized prompt engineering to guide a large language model (LLM) to iteratively generate executable graph neural network architectures, augmented by lightweight validation feedback for end-to-end automated design. Crucially, we recast architecture search as a natural language reasoning task, drastically reducing dependence on domain expertise. Evaluated on multiple standard graph benchmarks, the automatically generated architectures achieve superior accuracy compared to state-of-the-art GNAS methods, while also demonstrating significantly faster training convergence. These results validate the effectiveness and scalability of our semantic, knowledge-light GNAS paradigm.
Abstract
Graph Neural Architecture Search (GNAS) has shown promising results in automatically designing graph neural networks. However, GNAS still requires intensive human labor and rich domain knowledge to design the search space and search strategy. In this paper, we integrate GPT-4 into GNAS and propose a new GPT-4-based Graph Neural Architecture Search method (GPT4GNAS for short). The basic idea of our method is to design a new class of prompts for GPT-4 that guide it toward the generative task of graph neural architectures. The prompts consist of descriptions of the search space, search strategy, and search feedback of GNAS. By iteratively running GPT-4 with these prompts, GPT4GNAS generates more accurate graph neural networks with fast convergence. Experimental results show that embedding GPT-4 into GNAS outperforms the state-of-the-art GNAS methods.
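The iterative loop the abstract describes (prompt encoding the search space, strategy, and accumulated feedback; LLM proposes an architecture; lightweight validation scores it; the score is fed back into the next prompt) can be sketched as below. This is a hedged illustration, not the paper's implementation: `mock_llm`, `evaluate`, and `SEARCH_SPACE` are hypothetical stand-ins, and the real method would replace `mock_llm` with a GPT-4 chat-completion call whose response is parsed into an operator sequence.

```python
import random

# Illustrative candidate GNN layer operators; the paper's actual search
# space description is richer and expressed in natural language.
SEARCH_SPACE = ["gcn", "gat", "sage", "gin"]


def mock_llm(prompt: str) -> list:
    """Stand-in for a GPT-4 call: proposes a 3-layer operator sequence.

    Deterministic in the prompt so the sketch is reproducible; a real
    LLM would condition on the feedback embedded in the prompt text.
    """
    rng = random.Random(len(prompt))
    return [rng.choice(SEARCH_SPACE) for _ in range(3)]


def evaluate(arch: list) -> float:
    """Stand-in for lightweight validation (train briefly, score on a
    held-out split). Here: an arbitrary deterministic score in [0, 1]."""
    return sum(SEARCH_SPACE.index(op) for op in arch) / (3 * (len(SEARCH_SPACE) - 1))


def search(iterations: int = 5):
    """Iterate: build prompt with feedback -> propose -> evaluate -> record."""
    history = []
    best_arch, best_score = None, -1.0
    for _ in range(iterations):
        # The prompt bundles the search-space description plus the scored
        # history -- the "search feedback" component the abstract mentions.
        prompt = f"Search space: {SEARCH_SPACE}. Feedback so far: {history}."
        arch = mock_llm(prompt)
        score = evaluate(arch)
        history.append((arch, score))
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

In the paper's framing, the value of this loop is that the search strategy lives in the prompt text rather than in a hand-coded controller, so changing the strategy means rewriting a description, not an optimizer.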