🤖 AI Summary
Existing graph language models (GLMs) suffer from two key bottlenecks: graph generation relies on manually specified edge distribution assumptions, and text embedding depends heavily on large-scale labeled data. This paper proposes a unified semi-supervised framework that jointly optimizes graph construction and text embedding. Methodologically, it is the first to introduce the scale-free property of real-world graphs as a structural prior for GLMs, and it provides theoretical evidence that k-nearest neighbor (KNN) graphs naturally approximate this property. Leveraging this insight, the authors design a scale-free-guided graph augmentation and pseudo-labeling mechanism, enabling joint fine-tuning of graph structure learning and language representation. Extensive experiments on multiple benchmarks demonstrate significant gains in semi-supervised performance, validating the effectiveness of the scale-free prior, the structural plausibility of KNN graphs, and the framework's ability to reduce reliance on manual assumptions and labeled data.
📝 Abstract
Graph-language models (GLMs) have demonstrated great potential in graph-based semi-supervised learning. A typical GLM consists of two key stages: graph generation and text embedding, which are usually implemented by inferring a latent graph and finetuning a language model (LM), respectively. However, the former often relies on artificial assumptions about the underlying edge distribution, while the latter requires extensive data annotations. To tackle these challenges, this paper introduces a novel GLM that integrates graph generation and text embedding within a unified framework. Specifically, for graph generation, we leverage an inherent characteristic of real edge distribution--the scale-free property--as a structural prior. We unexpectedly find that this natural property can be effectively approximated by a simple k-nearest neighbor (KNN) graph. For text embedding, we develop a graph-based pseudo-labeler that utilizes scale-free graphs to provide complementary supervision for improved LM finetuning. Extensive experiments on representative datasets validate our findings on the scale-free structural approximation of KNN graphs and demonstrate the effectiveness of integrating graph generation and text embedding with a real structural prior. Our code is available at https://github.com/Jianglin954/SFGL.
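To make the KNN claim concrete, here is a minimal sketch (not the paper's implementation) of building a directed KNN graph over toy Gaussian features standing in for LM text embeddings; all data and the `knn_graph` helper are hypothetical. It illustrates why KNN graphs can approximate a heavy-tailed edge distribution: every node has out-degree exactly k, but in-degrees are free to vary, so a few "hub" nodes that sit near many others accumulate high in-degree.

```python
import math
import random
from collections import Counter

random.seed(0)

def knn_graph(points, k):
    """Directed KNN graph: each node gets an edge to its k nearest neighbors."""
    edges = []
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges

# Toy node features standing in for text embeddings (hypothetical data).
points = [[random.gauss(0, 1) for _ in range(8)] for _ in range(200)]
edges = knn_graph(points, k=5)

# Out-degree is fixed at k for every node, but in-degree is skewed:
# hub nodes appear in many other nodes' neighbor lists.
in_deg = Counter(dst for _, dst in edges)
print("max in-degree:", max(in_deg.values()))
print("min in-degree:", min(in_deg.get(i, 0) for i in range(len(points))))
```

The skewed in-degree distribution is the structural signal the scale-free prior exploits; the paper's contribution is to formalize and use this approximation, not merely to observe it.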