🤖 AI Summary
High-dimensional unsupervised representation learning suffers from difficulties in uncovering critical structural patterns and lacks interpretability.
Method: This paper proposes a multi-scale graph embedding framework based on spectral graph wavelets. It first proves—within the Paley–Wiener graph signal space—that spectral graph wavelets offer superior smoothness control compared to the Laplacian operator. The framework integrates contrastive learning for manifold-aware embedding and establishes an explicit, invertible mapping between the embedding space and the original feature space.
Contribution/Results: The method enables fully unsupervised quantification and ranking of feature importance while preserving multi-scale expressiveness and interpretability. It achieves significant improvements in clustering performance across multiple benchmark datasets. Notably, it is the first work to empirically validate, under purely unsupervised settings, the synergistic gain between interpretability and downstream task performance.
📝 Abstract
Deriving meaningful representations from complex, high-dimensional data in unsupervised settings is crucial across diverse machine learning applications. This paper introduces a framework for multi-scale graph network embedding based on spectral graph wavelets that employs a contrastive learning approach. We theoretically show that in Paley-Wiener spaces on combinatorial graphs, the spectral graph wavelets operator provides greater flexibility and control over smoothness compared to the Laplacian operator, motivating our approach. An additional key advantage of the proposed embedding is its ability to establish a correspondence between the embedding and input feature spaces, enabling the derivation of feature importance. We validate the effectiveness of our graph embedding framework on multiple public datasets across various downstream tasks, including clustering and unsupervised feature importance.