Learning Latent Graph Structures and their Uncertainty

📅 2024-05-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
In graph neural networks (GNNs), the underlying graph structure is often unknown, and optimizing solely for a node-prediction loss can bias the learned structure. This paper proposes a framework for jointly learning the latent graph structure and its uncertainty. Theoretically, the authors establish that a loss function tailored to stochastic model outputs simultaneously ensures correct learning of the adjacency-matrix distribution and optimal predictive performance. Methodologically, they design a differentiable stochastic graph-sampling mechanism, enabling end-to-end joint optimization of graph structure learning, uncertainty quantification, and GNN training. Experiments on multiple benchmark datasets show improved prediction accuracy alongside well-calibrated uncertainty estimates over the inferred graph structure.
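The differentiable graph-sampling idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the logit parameterization `theta`, and the use of a binary-concrete (relaxed Bernoulli) reparameterization are all assumptions made here for illustration.

```python
import numpy as np

def sample_adjacency(theta, tau=0.5, rng=None):
    """Relaxed Bernoulli (binary concrete) sample of an adjacency matrix.

    theta : (n, n) array of edge logits parameterizing a hypothetical
            latent edge distribution (illustrative, not the paper's exact model).
    tau   : temperature; smaller values push samples toward {0, 1}.
    """
    rng = np.random.default_rng(rng)
    # Reparameterization: logistic noise is added to the logits, so in an
    # autodiff framework gradients w.r.t. theta flow through the sample.
    u = rng.uniform(1e-9, 1 - 1e-9, size=theta.shape)
    logistic = np.log(u) - np.log(1 - u)
    return 1.0 / (1.0 + np.exp(-(theta + logistic) / tau))

# Example: 4-node graph with uniform (zero) edge logits.
A = sample_adjacency(np.zeros((4, 4)), tau=0.1, rng=0)
```

At low temperature the relaxed samples concentrate near 0 or 1, approximating discrete adjacency matrices while remaining differentiable in `theta`.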

📝 Abstract
Within a prediction task, Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy. As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task. In this paper, we demonstrate that minimization of a point-prediction loss function, e.g., the mean absolute error, does not guarantee proper learning of the latent relational information and its associated uncertainty. Conversely, we prove that a suitable loss function on the stochastic model outputs simultaneously grants (i) the unknown adjacency matrix latent distribution and (ii) optimal performance on the prediction task. Finally, we propose a sampling-based method that solves this joint learning task. Empirical results validate our theoretical claims and demonstrate the effectiveness of the proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Learning latent graph structures from data without direct training signals
Addressing uncertainty in learned graph topologies for better predictions
Proposing a method to jointly learn graph distributions and target predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning latent graph structures from data
Using stochastic model outputs for uncertainty
Sampling-based method for joint learning
A. Manenti
The Swiss AI Lab IDSIA USI-SUPSI, Università della Svizzera italiana, Lugano, Switzerland.
Daniele Zambon
The Swiss AI Lab IDSIA, USI
machine learning, graph representation learning, time series analysis, learning in nonstationary env
C. Alippi
The Swiss AI Lab IDSIA USI-SUPSI, Università della Svizzera italiana, Lugano, Switzerland; Politecnico di Milano, Milan, Italy.