🤖 AI Summary
This work addresses the joint modeling of causal relationships and nonlinear dynamic dependencies among nodes, together with topology inference, in dynamic graph settings such as brain networks, transportation systems, and financial markets, where the underlying graph structure is unknown. To overcome the limitations of conventional linear time-invariant models in capturing time-varying, nonlinear, and directed dependencies, a unified framework based on kernel dictionary selection is put forth. The framework integrates structural priors such as sparsity, acyclicity, low rank, and graph smoothness; supports both batch and online learning; and extends naturally to tensor representations. It brings together covariance selection, structural equation modeling, nonlinear vector autoregression, kernelized modeling, tensor decomposition, and convex optimization, and convergence guarantees are established for the resulting algorithms. Experiments show that leveraging higher-order statistical information yields significant gains, enabling accurate and interpretable inference of dynamic graph topologies.
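To make the kernel-dictionary idea concrete, here is a minimal sketch of one representative formulation; the notation is generic and not necessarily the chapter's own. Each nodal signal is regressed on lagged signals of the other nodes through nonlinear functions drawn from reproducing kernel Hilbert spaces induced by kernels in a prescribed dictionary, and a group-sparse penalty reveals the directed topology:

$$
y_{n,t} \;=\; \sum_{n' \ne n} \sum_{\ell=1}^{L} f_{nn'}^{(\ell)}\!\big(y_{n',\,t-\ell}\big) \;+\; e_{n,t},
\qquad
f_{nn'}^{(\ell)} \in \mathcal{H}_{\kappa}, \quad \kappa \in \mathcal{D} = \{\kappa_1,\ldots,\kappa_P\},
$$

$$
\min_{\{f_{nn'}^{(\ell)}\}} \;\; \sum_{n,t} \Big( y_{n,t} - \sum_{n' \ne n} \sum_{\ell=1}^{L} f_{nn'}^{(\ell)}\!\big(y_{n',\,t-\ell}\big) \Big)^{2}
\;+\; \lambda \sum_{n' \ne n} \Big( \sum_{\ell=1}^{L} \big\| f_{nn'}^{(\ell)} \big\|_{\mathcal{H}_{\kappa}}^{2} \Big)^{1/2}.
$$

A directed edge from node $n'$ to node $n$ is declared whenever the corresponding block of functions is not shrunk to zero; structural priors such as acyclicity or smoothness enter as additional constraints or penalties on the same objective.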
📝 Abstract
Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, and power networks, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional relational data. Approaches for undirected links connecting graph nodes are outlined, going all the way from correlation metrics to covariance selection, and revealing ties with smooth signal priors. To account for directional (possibly causal) relations among nodal variables and to address the limitations of linear time-invariant models in handling dynamic as well as nonlinear dependencies, a principled framework is surveyed that captures these complexities through judiciously selected kernels from a prescribed dictionary. Generalizations are also described via structural equations and vector autoregressions that can exploit attributes such as low rank, sparsity, acyclicity, and smoothness to model dynamic processes over possibly time-evolving topologies. It is argued that this approach supports both batch and online learning algorithms with convergence rate guarantees, is amenable to tensor (that is, multi-way array) formulations and decompositions that are well suited to multidimensional network data, and can seamlessly leverage high-order statistical information.
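As a small illustration of the undirected (covariance selection) side of this overview, the sketch below runs scikit-learn's graphical lasso on synthetic Gaussian data drawn from a known sparse precision matrix. It is only meant to show the general mechanism of reading edges off a sparse precision estimate, not the chapter's specific algorithms; the graph size, coupling strength, and regularization level are arbitrary choices.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix for a 6-node chain graph
# (nonzero off-diagonal entries encode the undirected edges).
N = 6
theta = np.eye(N)
for i in range(N - 1):
    theta[i, i + 1] = theta[i + 1, i] = 0.4

# Draw T i.i.d. samples from the corresponding Gaussian graphical model.
cov = np.linalg.inv(theta)
X = rng.multivariate_normal(np.zeros(N), cov, size=2000)

# Covariance selection via the graphical lasso: an l1 penalty on the
# precision matrix promotes sparsity, so surviving off-diagonal entries
# are read off as edges of the inferred topology.
model = GraphicalLasso(alpha=0.05).fit(X)
adjacency = (np.abs(model.precision_) > 1e-2).astype(int)
np.fill_diagonal(adjacency, 0)
print(adjacency)
```

The same pipeline structure (fit a regularized model, then threshold its coefficients into an adjacency pattern) carries over to the directed and nonlinear settings surveyed in the chapter, with the graphical-lasso step replaced by structural equation or kernel-based vector autoregression estimators.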