🤖 AI Summary
This paper unifies two non-parametric graph representation learning paradigms: graph layout (2D visualization) and node embedding (high-dimensional representation for downstream tasks). It proposes a single coherent neighbor embedding framework comprising graph t-SNE, which produces high-fidelity 2D layouts, and graph CNE, a contrastive method that learns high-dimensional node representations by optimizing the InfoNCE objective. The framework builds on established neighbor embedding methods and random-walk-based notions of node similarity, and requires no complex neural architectures. Experiments show that both methods strongly outperform state-of-the-art baselines, including DeepWalk, node2vec, and force-directed layouts, in preserving local graph structure, while being conceptually simpler.
📝 Abstract
Graph layouts and node embeddings are two distinct paradigms for non-parametric graph representation learning. In the former, nodes are embedded into 2D space for visualization purposes. In the latter, nodes are embedded into a high-dimensional vector space for downstream processing. State-of-the-art algorithms for these two paradigms, force-directed layouts and random-walk-based contrastive learning (such as DeepWalk and node2vec), have little in common. In this work, we show that both paradigms can be approached with a single coherent framework based on established neighbor embedding methods. Specifically, we introduce graph t-SNE, a neighbor embedding method for two-dimensional graph layouts, and graph CNE, a contrastive neighbor embedding method that produces high-dimensional node representations by optimizing the InfoNCE objective. We show that both graph t-SNE and graph CNE strongly outperform state-of-the-art algorithms in terms of local structure preservation, while being conceptually simpler.
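The two key ingredients mentioned above, random-walk-based contrastive learning and the InfoNCE objective, can be sketched concretely. The following is a minimal illustrative NumPy sketch, not the paper's implementation: the toy graph, walk length, window size, and embedding dimension are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph (illustrative): two triangles joined by one edge, as an adjacency list.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

def random_walk(start, length):
    """Uniform random walk of the given length starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(adj[walk[-1]])))
    return walk

def positive_pairs(walk, window=2):
    """DeepWalk-style positives: node pairs co-occurring within a window of the walk."""
    pairs = []
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if j != i:
                pairs.append((u, walk[j]))
    return pairs

def info_nce(emb, pairs, temperature=1.0):
    """InfoNCE loss: each positive pair is contrasted against all nodes as negatives."""
    sims = emb @ emb.T / temperature  # pairwise similarity matrix
    log_softmax = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean([log_softmax[u, v] for u, v in pairs])

emb = rng.normal(size=(6, 8))  # random 8-dimensional node embeddings
pairs = positive_pairs(random_walk(0, 20))
loss = info_nce(emb, pairs)
```

Minimizing this loss pulls the embeddings of nodes that co-occur on random walks together while pushing all other pairs apart; graph CNE as described in the abstract optimizes this kind of objective directly, without the skip-gram machinery of DeepWalk or node2vec.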