Re-visiting Skip-Gram Negative Sampling: Dimension Regularization for More Efficient Dissimilarity Preservation in Graph Embeddings

📅 2024-04-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the computational inefficiency of the repulsion step in graph embedding, where Skip-Gram with Negative Sampling (SGNS) is the standard workaround for the quadratic number of dissimilar node pairs. The authors propose replacing node-level negative sampling with dimension-wise regularization. Theoretically, they show that SGNS implicitly performs an approximate re-centering of the embedding dimensions, and that in the limit of large graphs the node-level repulsion objective converges to a dimension-level regularization objective, establishing an equivalence between skip-gram node contrast and dimension regularization. Building on this insight, they design a general algorithm-augmentation framework that prioritizes node attraction and swaps SGNS for dimension regularization; it requires no change to model architecture or loss function, enabling plug-and-play acceleration of any SGNS-based graph embedding algorithm (e.g., LINE, node2vec). Empirically, the augmented algorithms substantially reduce computational overhead while preserving downstream task performance.

📝 Abstract
A wide range of graph embedding objectives decompose into two components: one that attracts the embeddings of nodes that are perceived as similar, and another that repels embeddings of nodes that are perceived as dissimilar. Because real-world graphs are sparse and the number of dissimilar pairs grows quadratically with the number of nodes, Skip-Gram Negative Sampling (SGNS) has emerged as a popular and efficient repulsion approach. SGNS repels each node from a sample of dissimilar nodes, as opposed to all dissimilar nodes. In this work, we show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions. Such dimension operations are much more scalable than node operations. The dimension approach, in addition to being more efficient, yields a simpler geometric interpretation of the repulsion. Our result extends findings from the self-supervised learning literature to the skip-gram model, establishing a connection between skip-gram node contrast and dimension regularization. We show that in the limit of large graphs, under mild regularity conditions, the original node repulsion objective converges to optimization with dimension regularization. We use this observation to propose an algorithm augmentation framework that speeds up any existing algorithm, supervised or unsupervised, using SGNS. The framework prioritizes node attraction and replaces SGNS with dimension regularization. We instantiate this generic framework for LINE and node2vec and show that the augmented algorithms preserve downstream performance while dramatically increasing efficiency.
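To make the contrast concrete, here is a minimal numpy sketch of the two repulsion styles the abstract compares. This is an illustration of the general idea, not the paper's exact objective or update rule: `sgns_repulsion_grad` mimics node-wise SGNS repulsion (each node is pushed away from `k` sampled negatives, costing O(nk) dot products), while `dimension_regularization_grad` is a dimension-wise re-centering penalty on the column means (O(n) per dimension). All function names and the specific loss forms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 1000, 16
Z = rng.normal(size=(n_nodes, dim))  # node embedding matrix

def sgns_repulsion_grad(Z, k=5):
    """Node-wise repulsion: push each node away from k sampled negatives.

    Illustrative stand-in for the SGNS negative-sampling term; cost grows
    with the number of nodes times the number of negatives.
    """
    grad = np.zeros_like(Z)
    n = Z.shape[0]
    for i in range(n):
        negatives = rng.choice(n, size=k, replace=False)
        for j in negatives:
            s = 1.0 / (1.0 + np.exp(-Z[i] @ Z[j]))  # sigmoid of similarity
            grad[i] += s * Z[j]                      # repel z_i from z_j
    return grad / k

def dimension_regularization_grad(Z):
    """Dimension-wise alternative: penalize (1/2)*||column mean||^2.

    The gradient shifts every row by the per-dimension mean (up to scaling),
    i.e. it re-centers each embedding dimension toward zero -- the aggregate
    effect the paper attributes to SGNS repulsion.
    """
    mu = Z.mean(axis=0, keepdims=True)        # shape (1, dim)
    return np.broadcast_to(mu, Z.shape).copy()

# One gradient step of the dimension regularizer shrinks the column means.
Z_new = Z - 0.5 * dimension_regularization_grad(Z)
print(np.linalg.norm(Z.mean(axis=0)), "->", np.linalg.norm(Z_new.mean(axis=0)))
```

The step `Z - eta * mu` maps the column-mean vector `mu` to `(1 - eta) * mu`, so repeated steps drive the dimensions toward zero mean without ever touching individual node pairs, which is the source of the efficiency gain described above.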
Problem

Research questions and friction points this paper is trying to address.

Improving the efficiency of the repulsion step in graph embedding
Replacing node-wise SGNS with scalable dimension-wise regularization
Reducing computational cost while maintaining embedding quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces SGNS with dimension regularization
Improves scalability and reduces resource usage
Maintains performance with simpler geometric interpretation