🤖 AI Summary
To address the reliance of unlabeled node classification on extensive manual annotations in text-attributed graphs—and the high query costs or noisy pseudo-labels inherent in existing LLM-assisted approaches—this paper proposes Locle, an active self-training framework. Methodologically, Locle introduces two key innovations: (1) a novel critical-node selection mechanism based on label discordance and entropy, enabling precise identification of nodes requiring LLM intervention; and (2) a GNN-LLM co-designed joint label refinement module coupled with topology rewiring, which enhances pseudo-label quality and structural robustness. Requiring zero human annotation, Locle achieves cost-effective self-training: it outperforms state-of-the-art methods across five benchmark datasets, improving accuracy on DBLP by 8.08%, with an average LLM invocation cost per node under $0.01.
📝 Abstract
Graph neural networks (GNNs) have become the preferred models for node classification in graph data due to their robust capabilities in integrating graph structures and attributes. However, these models heavily depend on a substantial amount of high-quality labeled data for training, which is often costly to obtain. With the rise of large language models (LLMs), a promising approach is to utilize their exceptional zero-shot capabilities and extensive knowledge for node labeling. Despite encouraging results, this approach either requires numerous queries to LLMs or suffers from reduced performance due to noisy labels generated by LLMs. To address these challenges, we introduce Locle, an active self-training framework that does Label-free node Classification with LLMs cost-Effectively. Locle iteratively identifies small sets of "critical" samples using GNNs and extracts informative pseudo-labels for them with both LLMs and GNNs, serving as additional supervision signals to enhance model training. Specifically, Locle comprises three key components: (i) an effective active node selection strategy for initial annotations; (ii) a careful sample selection scheme to identify "critical" nodes based on label disharmonicity and entropy; and (iii) a label refinement module that combines LLMs and GNNs with a rewired topology. Extensive experiments on five benchmark text-attributed graph datasets demonstrate that Locle significantly outperforms state-of-the-art methods under the same query budget to LLMs in terms of label-free node classification. Notably, on the DBLP dataset with 14.3k nodes, Locle achieves an 8.08% improvement in accuracy over the state-of-the-art at a cost of less than one cent. Our code is available at https://github.com/HKBU-LAGAS/Locle.
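To illustrate the entropy side of the "critical" node selection described above, here is a minimal, hypothetical sketch (not the authors' implementation; function and variable names are assumptions) that scores nodes by the Shannon entropy of a GNN's softmax outputs and selects the most uncertain ones for LLM annotation:

```python
import numpy as np

def select_critical_nodes(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` nodes whose GNN predictions are most uncertain.

    probs: (num_nodes, num_classes) softmax outputs of a GNN.
    Returns node indices sorted by descending prediction entropy.
    """
    eps = 1e-12  # avoid log(0) for confident (near one-hot) rows
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    order = np.argsort(-entropy)  # most uncertain first
    return order[:budget]

# Toy example: 5 nodes, 3 classes
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.50, 0.00],
    [0.90, 0.05, 0.05],
])
print(select_critical_nodes(probs, budget=2))  # -> [1 2]
```

In the full framework this uncertainty signal would be combined with a label-disharmonicity criterion (disagreement between a node's predicted label and those of its neighbors) before querying the LLM, keeping the per-iteration query budget small.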