LASE: Learned Adjacency Spectral Embeddings

📅 2024-12-23
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key limitations of adjacency spectral embedding (ASE) for graph data—namely, limited interpretability, low parameter efficiency, weak robustness to missing edges, and uncontrolled inference complexity. To this end, the authors propose LASE, a learnable spectral embedding architecture based on algorithm unrolling. LASE recasts the gradient-descent computation of adjacency eigenvectors as differentiable graph neural network layers, combining GCN and sparse-attention GAT modules to enable end-to-end, task-driven learning of discriminative spectral embeddings. Its core contributions are threefold: (i) an interpretable, parameter-efficient spectral embedding paradigm that is robust to unobserved edges; (ii) elimination of precomputed eigendecompositions while keeping inference complexity controllable; and (iii) competitive performance on link prediction and node classification—outperforming a GNN equipped with task-agnostic, precomputed spectral positional encodings—along with favorable approximation-error-versus-computation tradeoffs that can beat optimized eigensolvers from scientific computing libraries.

📝 Abstract
We put forth a principled design of a neural architecture to learn nodal Adjacency Spectral Embeddings (ASE) from graph inputs. By bringing to bear the gradient descent (GD) method and leveraging the principle of algorithm unrolling, we truncate and re-interpret each GD iteration as a layer in a graph neural network (GNN) that is trained to approximate the ASE. Accordingly, we call the resulting embeddings and our parametric model Learned ASE (LASE), which is interpretable, parameter efficient, robust to inputs with unobserved edges, and offers controllable complexity during inference. LASE layers combine Graph Convolutional Network (GCN) and fully-connected Graph Attention Network (GAT) modules, which is intuitively pleasing since GCN-based local aggregations alone are insufficient to express the sought graph eigenvectors. We propose several refinements to the unrolled LASE architecture (such as sparse attention in the GAT module and decoupled layerwise parameters) that offer favorable approximation error versus computation tradeoffs; even outperforming heavily-optimized eigendecomposition routines from scientific computing libraries. Because LASE is a differentiable function with respect to its parameters as well as its graph input, we can seamlessly integrate it as a trainable module within a larger (semi-)supervised graph representation learning pipeline. The resulting end-to-end system effectively learns "discriminative ASEs" that exhibit competitive performance in supervised link prediction and node classification tasks, outperforming a GNN even when the latter is endowed with open loop, meaning task-agnostic, precomputed spectral positional encodings.
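The unrolling idea described in the abstract can be sketched in a few lines. ASE seeks an embedding X minimizing ||A − XXᵀ||²_F, and each GD iteration on that objective becomes one "layer": the update mixes a local term (AX, expressible by a GCN) with a global term (X(XᵀX)), which is why local aggregations alone cannot recover the eigenvectors. The snippet below is an illustrative reconstruction from the abstract, not the paper's code; the synthetic graph, step size, and layer count are all made-up assumptions.

```python
import numpy as np

# Synthetic symmetric adjacency with low-rank community structure (illustrative only)
rng = np.random.default_rng(0)
n, d = 60, 3
B = rng.random((n, d))
A = ((B @ B.T) > 1.0).astype(float)
np.fill_diagonal(A, 0)

# Unrolled GD on ||A - X X^T||_F^2: each iteration plays the role of one layer.
# Update = X + eta * (A X - X (X^T X)): a GCN-like local term plus a global term.
eta, layers = 0.01, 500
X = 0.1 * rng.standard_normal((n, d))
for _ in range(layers):
    X = X + eta * (A @ X - X @ (X.T @ X))

# Reference ASE: top-d eigenvectors of A scaled by sqrt of their eigenvalues
w, V = np.linalg.eigh(A)
top = np.argsort(w)[-d:]
ase = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

err_unrolled = np.linalg.norm(A - X @ X.T)   # unrolled-GD approximation error
err_ase = np.linalg.norm(A - ase @ ase.T)    # eigendecomposition baseline
```

On this toy graph the unrolled iterations reach essentially the same approximation error as the explicit eigendecomposition, which is the behavior LASE's trainable layers start from before task-driven fine-tuning.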
Problem

Research questions and friction points this paper is trying to address.

Learning nodal adjacency spectral embeddings from graphs
Designing interpretable and efficient graph neural networks
Enabling differentiable spectral embeddings for supervised tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unrolled gradient descent into neural network layers
Combined GCN and GAT modules for spectral embeddings
Differentiable end-to-end trainable spectral embedding pipeline
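To make the innovation bullets concrete, one unrolled layer can be written as a function whose two terms map onto the paper's modules, with a separate step size per layer standing in for the "decoupled layerwise parameters" (which LASE learns end to end). This is a hedged sketch under assumed names and shapes, not the paper's implementation; in LASE the two terms are realized by trainable GCN and sparse-attention GAT modules rather than fixed matrix products.

```python
import numpy as np

def lase_layer(A, X, eta):
    """One unrolled gradient step on ||A - X X^T||_F^2 (illustrative)."""
    gcn_term = A @ X              # local aggregation: what a GCN module expresses
    global_term = X @ (X.T @ X)   # global pairwise term: needs (sparse) attention
    return X + eta * (gcn_term - global_term)

# Tiny symmetric adjacency and random initial embedding (made-up example data)
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(8, 8)).astype(float)
A = np.triu(A, 1)
A = A + A.T
X = rng.standard_normal((8, 2))

etas = [0.05, 0.02, 0.01]  # decoupled per-layer step sizes; learned in LASE
for eta in etas:
    X = lase_layer(A, X, eta)
```

Because each layer is a smooth function of both X and A, gradients flow through the whole stack, which is what lets the pipeline train "discriminative ASEs" against a downstream link-prediction or classification loss.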