🤖 AI Summary
To address the challenge of completely imbalanced class labels in network embedding, where certain classes have no labeled nodes at all, this paper proposes two semi-supervised approaches: the shallow model RSDNE and the graph neural network RECT. RSDNE preserves intra-class compactness and inter-class separability via adaptive similarity constraints; RECT is the first to incorporate class-semantic knowledge, jointly modeling node features and multi-label information to improve discriminability on zero-shot classes. This work constitutes the first systematic study of network embedding under fully imbalanced label settings. Extensive experiments on multiple real-world datasets demonstrate that both methods significantly outperform existing semi-supervised embedding baselines, in particular transferring knowledge effectively to unlabeled (zero-shot) classes, with substantial gains on downstream node classification and clustering tasks.
📝 Abstract
Network embedding, which aims to project a network into a low-dimensional space, is increasingly becoming a focus of network research. Semi-supervised network embedding takes advantage of labeled data and has shown promising performance. However, existing semi-supervised methods yield unappealing results in the completely-imbalanced label setting, where some classes have no labeled nodes at all. To alleviate this, we propose two novel semi-supervised network embedding methods. The first is a shallow method named RSDNE. Specifically, to benefit from the completely-imbalanced labels, RSDNE guarantees both intra-class similarity and inter-class dissimilarity in an approximate way. The second, RECT, is a new class of graph neural network. Unlike RSDNE, RECT benefits from the completely-imbalanced labels by exploring class-semantic knowledge, which enables it to handle networks with node features and multi-label settings. Experimental results on several real-world datasets demonstrate the superiority of the proposed methods.
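To make the "intra-class similarity and inter-class dissimilarity" idea concrete, here is a minimal illustrative sketch of such a pairwise objective in NumPy. This is not RSDNE's actual formulation (the paper's adaptive constraints and optimization are more involved); the function name, the hinge-style margin, and the use of `None` to mark unlabeled nodes are all assumptions made for illustration.

```python
import numpy as np

def pairwise_label_loss(Z, labels):
    """Toy objective in the spirit of the paper: pull labeled same-class
    node embeddings together and push labeled different-class ones apart.
    Z is an (n, d) embedding matrix; labels[i] is None for an unlabeled
    node (e.g. one whose class has no labels in the completely-imbalanced
    setting). Illustrative sketch only, not the paper's actual method."""
    intra, inter = 0.0, 0.0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] is None or labels[j] is None:
                continue  # unlabeled pairs contribute no supervision
            d = np.sum((Z[i] - Z[j]) ** 2)  # squared Euclidean distance
            if labels[i] == labels[j]:
                intra += d                      # intra-class compactness
            else:
                inter += max(0.0, 1.0 - d)      # hinge-style separation margin
    return intra + inter
```

An embedding that places same-class nodes together and different classes apart drives this loss toward zero, while a mixed-up embedding is penalized; in a full method, such a term would be combined with a structure-preserving objective over the whole (mostly unlabeled) network.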