🤖 AI Summary
This paper addresses the joint inference of dynamic network topology and the imputation of missing data from partially observed time-varying graph signals. We propose a unified non-convex optimization framework that simultaneously recovers a sequence of graph Laplacians and the unobserved signal values, letting information flow in both directions between the graph structure and the signals. The formulation incorporates a fused-lasso-type temporal regularizer that captures gradual topological evolution while remaining robust to noise and suppressing spurious edges. An efficient ADMM-based algorithm is developed, with closed-form solutions for both the graph and signal subproblems. Theoretically, we derive non-asymptotic statistical error bounds and establish convergence of the algorithm to a stationary point. Experiments demonstrate that the method significantly outperforms existing baselines under high missingness rates, achieving superior joint estimation accuracy and faster convergence.
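To make the setup concrete, one plausible form for such a joint objective is sketched below; the notation (the observation masks $P_{\Omega_t}$, the weights $\alpha, \beta$, and the Laplacian set $\mathcal{L}$) is our own illustration and is not taken from the paper:

```latex
\min_{\{L_t\},\,\{X_t\}}\;
  \sum_{t=1}^{T}\Big[\tfrac{1}{2}\big\|P_{\Omega_t}(X_t - Y_t)\big\|_F^2
    + \alpha\,\operatorname{tr}\!\big(X_t^{\top} L_t X_t\big)\Big]
  + \beta \sum_{t=2}^{T}\big\|L_t - L_{t-1}\big\|_{1}
  \quad \text{s.t. } L_t \in \mathcal{L},\; t = 1,\dots,T
```

Here $Y_t$ holds the observed data, the trace term enforces signal smoothness on the current graph, the $\ell_1$ penalty on successive Laplacian differences is the fused-lasso-type temporal regularizer, and $\mathcal{L}$ denotes the set of valid combinatorial Laplacians (symmetric, zero row sums, nonpositive off-diagonals). The bilinear coupling between $L_t$ and $X_t$ is precisely what makes the joint problem non-convex.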
📝 Abstract
This paper tackles the challenging problem of jointly inferring time-varying network topologies and imputing missing data from partially observed graph signals. We propose a unified non-convex optimization framework to simultaneously recover a sequence of graph Laplacian matrices while reconstructing the unobserved signal entries. Unlike conventional decoupled methods, our integrated approach facilitates a bidirectional flow of information between the graph and signal domains, yielding superior robustness, particularly in high missing-data regimes. To capture realistic network dynamics, we introduce a fused-lasso-type regularizer on the sequence of Laplacians. This penalty promotes temporal regularity by penalizing successive changes between consecutive Laplacians, suppressing spurious, noise-induced variations while still permitting gradual topological evolution. To solve the joint optimization problem, we develop an efficient Alternating Direction Method of Multipliers (ADMM) algorithm that leverages the problem's structure to yield closed-form solutions for both the graph and signal subproblems. This design ensures scalability to large networks and long time horizons. On the theoretical front, despite the inherent non-convexity, we prove that the proposed ADMM scheme converges to a stationary point. Furthermore, we derive non-asymptotic statistical guarantees: high-probability error bounds for the graph estimator as a function of sample size, signal smoothness, and the intrinsic temporal variability of the graph. Extensive numerical experiments validate the approach, demonstrating that it significantly outperforms state-of-the-art baselines in both convergence speed and the joint accuracy of graph learning and signal recovery.
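To illustrate the alternating structure described above, here is a minimal NumPy sketch; the splitting, the Laplacian-set mapping, the sequential fused-shrinkage pass, and all names and parameters below are our assumptions for exposition, not the paper's actual updates:

```python
# A minimal NumPy sketch of the alternating ADMM scheme the abstract describes.
# Every name, step size, and the exact splitting below are illustrative
# assumptions -- this shows the shape of the method, not the paper's algorithm.
import numpy as np

def to_laplacian(M):
    """Heuristic mapping onto combinatorial Laplacians (symmetric, nonpositive
    off-diagonals, zero row sums); not the exact Euclidean projection."""
    S = (M + M.T) / 2
    W = -np.minimum(S, 0.0)              # nonnegative edge weights
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def impute_signals(Y, masks, Ls, alpha):
    """Signal subproblem: for each t, solve (diag(m_t) + alpha*L_t) X = diag(m_t) Y_t,
    the closed-form graph-smoothness interpolation of the missing entries."""
    X = []
    for Yt, mt, Lt in zip(Y, masks, Ls):
        A = np.diag(mt.astype(float)) + alpha * Lt + 1e-8 * np.eye(Lt.shape[0])
        X.append(np.linalg.solve(A, mt[:, None] * Yt))
    return X

def soft_threshold(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def admm(Y, masks, alpha=1.0, beta=0.1, rho=1.0, iters=50):
    """Y: list of T arrays of shape (n, k) (k snapshots per time step);
    masks: list of T boolean arrays of shape (n,) flagging observed nodes.
    Returns the estimated Laplacian sequence and the imputed signals."""
    T, n = len(Y), Y[0].shape[0]
    Ls = [np.eye(n) - np.ones((n, n)) / n for _ in range(T)]  # complete-graph init
    Z = [L.copy() for L in Ls]                                # splitting variables
    U = [np.zeros((n, n)) for _ in range(T)]                  # scaled duals
    for _ in range(iters):
        # Signal step: closed-form imputation given the current graphs.
        X = impute_signals(Y, masks, Ls, alpha)
        # Graph step: quadratic in L_t given Z_t, hence closed-form up to the
        # Laplacian-set mapping; X_t X_t^T drives the tr(X^T L X) smoothness term.
        Ls = [to_laplacian((Zt - Ut) - (alpha / rho) * (Xt @ Xt.T))
              for Xt, Zt, Ut in zip(X, Z, U)]
        # Z step: a simple sequential pass approximating the fused-lasso prox,
        # shrinking successive differences entrywise (the temporal regularizer).
        Z[0] = Ls[0] + U[0]
        for t in range(1, T):
            Z[t] = Z[t - 1] + soft_threshold((Ls[t] + U[t]) - Z[t - 1], beta / rho)
        # Dual ascent on the consensus constraint L_t = Z_t.
        U = [Ut + Lt - Zt for Ut, Lt, Zt in zip(U, Ls, Z)]
    return Ls, impute_signals(Y, masks, Ls, alpha)
```

The bidirectional coupling the abstract emphasizes is visible in the loop: the signal step uses the current graphs and the graph step uses the freshly imputed signals, so errors in one domain can be corrected by the other rather than propagated.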