🤖 AI Summary
This work addresses the limited interpretability and robustness of graph convolutional networks (GCNs) that stem from inadequate uncertainty modeling. To this end, we propose variational spatial and spatio-temporal graph convolutional frameworks that jointly quantify predictive uncertainty and layer-wise attention uncertainty. Methodologically, we embed variational inference into the GCN architecture and introduce a learnable, uncertainty-aware attention module. Experiments on the Finnish board membership dataset and the NTU-60, NTU-120, and Kinetics action recognition datasets demonstrate that our approach improves prediction accuracy and, for the first time, enables fine-grained uncertainty quantification over intermediate GCN representations (e.g., attention weights). This significantly enhances decision transparency and model trustworthiness, establishing a reliable and interpretable graph learning paradigm for high-stakes applications such as social trading analysis and skeleton-based action recognition.
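To make the core idea concrete, here is a minimal sketch of a variational graph convolution layer in the spirit described above: each weight is modeled as a Gaussian (mean and log-variance) and a fresh weight sample is drawn on every forward pass, so repeated passes yield a distribution over outputs. This is an illustrative assumption, not the authors' implementation; the class name `VariationalGraphConv`, the prior scale, and the use of a pre-normalized adjacency matrix are all hypothetical.

```python
import torch
import torch.nn as nn

class VariationalGraphConv(nn.Module):
    """Hypothetical variational GCN layer: weights follow a learned Gaussian
    posterior, sampled anew each forward pass (reparameterization trick)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Posterior mean and log-variance of the weight matrix (assumed form).
        self.w_mean = nn.Parameter(torch.empty(in_features, out_features))
        self.w_logvar = nn.Parameter(torch.full((in_features, out_features), -6.0))
        nn.init.xavier_uniform_(self.w_mean)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Reparameterization: W = mu + sigma * eps, with eps ~ N(0, I).
        eps = torch.randn_like(self.w_mean)
        w = self.w_mean + torch.exp(0.5 * self.w_logvar) * eps
        # Standard GCN propagation with a pre-normalized adjacency (N x N):
        # (N x N) @ (N x F_in) @ (F_in x F_out) -> (N x F_out).
        return adj_norm @ x @ w
```

Because the layer is stochastic, two forward passes on the same input generally disagree; the spread of those outputs is what the paper treats as model uncertainty.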
📝 Abstract
Estimation of model uncertainty can improve both the explainability and the accuracy of Graph Convolutional Networks. Uncertainty estimates can also be used in critical applications, allowing an expert or additional models to verify the model's results. In this paper, we propose Variational Neural Network versions of spatial and spatio-temporal Graph Convolutional Networks. We estimate uncertainty in both the outputs and the layer-wise attentions of the models, which has the potential to improve model explainability. We showcase the benefits of these models on the social trading analysis and skeleton-based human action recognition tasks using the Finnish board membership, NTU-60, NTU-120, and Kinetics datasets, where we show improved model accuracy in addition to the estimated model uncertainties.
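The output-uncertainty estimation described in the abstract is typically obtained by Monte Carlo sampling over the stochastic forward pass. The sketch below shows one plausible way to do this for a variational model such as the layer above; the function name `mc_uncertainty`, the number of samples, and the assumption that `model` returns class logits are all illustrative, not taken from the paper.

```python
import torch

@torch.no_grad()
def mc_uncertainty(model, x, adj_norm, n_samples: int = 20):
    """Monte Carlo estimate of predictive mean and per-class variance for a
    stochastic (variational) model whose forward pass samples fresh weights."""
    # Each call to model() draws new weight samples, giving different logits.
    probs = torch.stack(
        [model(x, adj_norm).softmax(dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, num_nodes, num_classes)
    mean = probs.mean(dim=0)  # predictive distribution
    var = probs.var(dim=0)    # per-class uncertainty estimate
    return mean, var
```

The same averaging scheme can be applied to intermediate quantities such as attention weights, which is how layer-wise attention uncertainty of the kind described above could be read out for inspection by an expert.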