🤖 AI Summary
STGNNs are limited in expressive power and generalization due to reliance on fixed, handcrafted graph structures. To address this, we propose the first framework integrating **persistent homology filtration** into an ensemble of graph neural networks for time-series regression. Our method automatically constructs a data-driven, multi-scale collection of graphs by extracting topological features across varying filtration scales, and introduces an **attention-based routing mechanism** to adaptively fuse predictions from multiple GNNs. This enables interpretable, multi-scale, adaptive graph construction and collaborative learning. Extensive experiments demonstrate significant improvements over single-graph STGNN baselines on seismic activity forecasting and traffic benchmarks (PEMS-BAY, METR-LA). Moreover, our approach provides topological interpretability via persistence diagrams, revealing how structural patterns at different scales contribute to predictions.
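To make the multi-scale graph construction concrete, here is a minimal illustrative sketch (not the paper's implementation) of the core idea behind a filtration-based graph collection: sweeping a distance threshold across a set of filtration scales yields a nested family of adjacency matrices, the 1-skeletons of a Vietoris-Rips-style filtration. The function name `filtration_graphs` and the toy sensor coordinates are hypothetical.

```python
import numpy as np

def filtration_graphs(points, scales):
    """Build one adjacency matrix per filtration scale: connect two
    points whenever their Euclidean distance is <= the scale.
    As the scale grows, edge sets are nested (a filtration)."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    graphs = []
    for eps in scales:
        adj = (dist <= eps).astype(float)
        np.fill_diagonal(adj, 0.0)  # drop self-loops
        graphs.append(adj)
    return graphs

# Toy example: 4 sensors on a line at positions 0, 1, 2, 5.
pts = np.array([[0.0], [1.0], [2.0], [5.0]])
gs = filtration_graphs(pts, scales=[1.0, 2.5, 5.0])
# Edge counts grow monotonically with the scale: [2, 3, 6].
print([int(g.sum() // 2) for g in gs])
```

Each resulting graph would then feed one member of the GNN ensemble, so coarse and fine structural patterns are modeled by separate learners.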
📝 Abstract
The effectiveness of Spatio-temporal Graph Neural Networks (STGNNs) in time-series applications is often limited by their dependence on fixed, hand-crafted input graph structures. Motivated by an insight from the Topological Data Analysis (TDA) paradigm -- that real-world data exhibits multi-scale patterns -- we construct several graphs using Persistent Homology Filtration, a mathematical framework describing the multiscale structural properties of data points. We then use the constructed graphs as inputs to an ensemble of Graph Neural Networks. The ensemble aggregates the signals from the individual learners via an attention-based routing mechanism, thus systematically encoding the inherent multiscale structures of the data. Four real-world experiments on seismic activity prediction and traffic forecasting (PEMS-BAY, METR-LA) demonstrate that our approach consistently outperforms single-graph baselines while providing interpretable insights.
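The attention-based routing described above can be sketched as a scaled dot-product gate over the per-graph predictions. This is only an assumed, generic formulation -- the paper's exact mechanism is not specified here -- and the names `attention_fusion`, `query`, and `keys` are illustrative: each ensemble member contributes a prediction and a key, and a shared query determines its mixing weight.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(preds, query, keys):
    """Fuse per-graph predictions: weight_i ∝ exp(query · key_i / √d),
    and the output is the weight-averaged prediction."""
    scores = keys @ query / np.sqrt(len(query))
    weights = softmax(scores)
    return weights @ np.stack(preds), weights

# Two hypothetical ensemble members with disagreeing predictions.
preds = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
query = np.array([10.0, 0.0])  # strongly matches the first member's key
fused, w = attention_fusion(preds, query, keys)
print(fused, w)
```

Because the weights are input-dependent, the router can lean on coarse-scale graphs for some samples and fine-scale graphs for others, which is what enables the multi-scale interpretability claimed above.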