🤖 AI Summary
Traditional graph signal processing (GSP) relies on synchronous observations, deterministic modeling, and strict node-wise correspondences—strong assumptions that real-world graph data with uncertainty often violates. To address these limitations, we propose Graph Distribution-Valued Signals (GDSs), a novel framework that models node signals as probability distributions in the Wasserstein space, thereby generalizing GSP from vector-valued to distribution-valued signals. This paradigm shift unifies and extends classical GSP: deterministic, synchronous signals emerge as special cases; node-level uncertainty and stochasticity are inherently captured; and alignment-free, unpaired distribution filtering becomes feasible. Leveraging optimal transport theory and Wasserstein geometry, we establish a systematic theoretical foundation for distribution-domain graph filtering, representation learning, and inference. Empirical evaluation on prediction tasks demonstrates substantial improvements in filter learning performance and robust modeling of complex, uncertain graph signals.
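To make the distribution-domain filtering idea concrete, here is a minimal sketch (not the paper's actual construction) under simplifying assumptions: each node carries a 1-D Gaussian signal, and one "graph shift" replaces a node's distribution by the Wasserstein-2 barycenter of its neighbors' distributions. For 1-D Gaussians the W2 barycenter has a closed form — it is again Gaussian, with mean and standard deviation equal to the weighted averages of the inputs' means and standard deviations. All names (`gaussian_w2_shift`, the toy graph) are illustrative.

```python
import numpy as np

def gaussian_w2_shift(W, mu, sigma):
    """One illustrative 'graph shift' on distribution-valued signals.

    Node i carries a 1-D Gaussian N(mu[i], sigma[i]**2). Its shifted
    signal is the Wasserstein-2 barycenter of its neighbors' Gaussians,
    weighted by the row-normalized adjacency. In 1-D this barycenter is
    Gaussian with mean = weighted mean of means and std = weighted mean
    of stds (closed form via averaged quantile functions).
    """
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic shift weights
    return P @ mu, P @ sigma

# Toy 3-node path graph with self-loops (hypothetical example)
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
mu = np.array([0.0, 2.0, 4.0])     # per-node means
sigma = np.array([1.0, 0.5, 1.0])  # per-node stds (uncertainty)

mu2, sigma2 = gaussian_w2_shift(W, mu, sigma)
```

Note that, unlike averaging raw samples, the shift acts on the distributions themselves, so each node's output retains an explicit uncertainty (`sigma2`) rather than a single point value.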
📝 Abstract
We introduce a novel framework for graph signal processing (GSP) that models signals as graph distribution-valued signals (GDSs), i.e., probability distributions in the Wasserstein space. This approach overcomes key limitations of classical vector-based GSP, including the assumption of synchronous observations over vertices, the inability to capture uncertainty, and the requirement for strict correspondence in graph filtering. By representing signals as distributions, GDSs naturally encode uncertainty and stochasticity while strictly generalizing traditional graph signals. We establish a systematic dictionary mapping core GSP concepts to their GDS counterparts, demonstrating that classical definitions are recovered as special cases. The framework's effectiveness is validated experimentally through graph filter learning on prediction tasks.
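The claim that classical signals are recovered as special cases can be sketched as follows (an illustrative reduction, not the paper's formal dictionary): a deterministic graph signal corresponds to Dirac masses, for which the 1-D closed form gives W2(δ_a, δ_b) = |a − b|, and a W2-barycentric shift on Diracs degenerates to the familiar vector shift x → Px.

```python
import numpy as np

def w2_dirac(a, b):
    """W2 distance between point masses delta_a and delta_b is |a - b|."""
    return abs(a - b)

# Deterministic signals as Dirac masses: zero variance everywhere.
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
P = W / W.sum(axis=1, keepdims=True)   # row-stochastic shift
x = np.array([1.0, 2.0, 3.0])          # classical vector-valued signal
mu, sigma = x, np.zeros_like(x)        # Dirac masses delta_{x_i}

# Barycentric shift of the Diracs: means average, variances stay zero,
# so the distribution-valued shift coincides with classical filtering.
mu_shift, sigma_shift = P @ mu, P @ sigma
```

Here `mu_shift` equals `P @ x` and `sigma_shift` stays identically zero, so no uncertainty is introduced and the distribution-valued filter reproduces the classical one on deterministic inputs.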