🤖 AI Summary
This work addresses the challenge of modeling functional signals in infinite-dimensional Hilbert spaces. Methodologically, it takes a first step in extending coVariance Neural Networks (VNNs) to the infinite-dimensional setting, introducing Hilbert coVariance Filters (HVFs) and Hilbert coVariance Networks (HVNs). HVFs are convolutional filters built on the (empirical) covariance operator; HVNs stack HVF filterbanks with nonlinear activations for hierarchical feature extraction and come with a principled discretization procedure that adapts to diverse function spaces, from multivariate real-valued functions to reproducing kernel Hilbert spaces. Theoretically, empirical HVFs are proven to recover the Functional PCA (FPCA) of the filtered signals. Empirically, HVNs show robust performance compared to MLP and FPCA-based classifiers on both synthetic and real-world time-series classification tasks.
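To make the construction concrete, here is a minimal NumPy sketch of one HVN-style layer in the discretized setting: signals sampled on a grid, a polynomial filter in the empirical covariance matrix, and a pointwise nonlinearity. The helper names (`empirical_covariance`, `covariance_filter`), the filter taps, and the `tanh` activation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def empirical_covariance(X):
    """Empirical covariance of m discretized signals; X has shape (m, n)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    return Xc.T @ Xc / X.shape[0]

def covariance_filter(X, C, taps):
    """Apply h(C) x = sum_k taps[k] * C^k x to each row of X."""
    Y = np.zeros_like(X)
    P = X.copy()                 # holds C^k x for each row (C is symmetric)
    for w_k in taps:
        Y += w_k * P
        P = P @ C                # advance from C^k x to C^(k+1) x
    return Y

# toy data: 100 signals sampled on a 64-point grid
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))
C = empirical_covariance(X)

# one HVN-style layer: a covariance filterbank followed by a nonlinearity
taps_bank = [np.array([1.0, 0.0, 0.0]),   # identity filter
             np.array([0.5, 0.3, 0.2])]   # order-2 polynomial in C
features = [np.tanh(covariance_filter(X, C, taps)) for taps in taps_bank]
```

In a learned network the taps would be trainable parameters; here they are fixed only to keep the sketch self-contained.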
📝 Abstract
CoVariance Neural Networks (VNNs) perform graph convolutions on the empirical covariance matrix of signals defined over finite-dimensional Hilbert spaces, motivated by robustness and transferability properties. Yet, little is known about how these arguments extend to infinite-dimensional Hilbert spaces. In this work, we take a first step by introducing a novel convolutional learning framework for signals defined over infinite-dimensional Hilbert spaces, centered on the (empirical) covariance operator. We constructively define Hilbert coVariance Filters (HVFs) and design Hilbert coVariance Networks (HVNs) as stacks of HVF filterbanks with nonlinear activations. We propose a principled discretization procedure, and we prove that empirical HVFs can recover the Functional PCA (FPCA) of the filtered signals. We then describe the versatility of our framework with examples ranging from multivariate real-valued functions to reproducing kernel Hilbert spaces. Finally, we validate HVNs on both synthetic and real-world time-series classification tasks, showing robust performance compared to MLP and FPCA-based classifiers.
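As a companion to the FPCA claim in the abstract, the following standalone sketch illustrates, in the discretized setting, why covariance filters pair naturally with FPCA: the empirical FPCA basis diagonalizes any polynomial filter h(C), so filtering simply rescales the i-th FPCA score by h(λᵢ). The dimensions and taps are illustrative; this is a numerical check of the spectral identity, not the paper's recovery procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))          # discretized signals on a 32-point grid
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / X.shape[0]                  # empirical covariance
lam, V = np.linalg.eigh(C)                  # discretized FPCA: eigenpairs of C

taps = np.array([0.5, 0.3, 0.2])            # illustrative filter taps
x = Xc[0]
y = taps[0] * x + taps[1] * (C @ x) + taps[2] * (C @ (C @ x))   # h(C) x

# In the FPCA basis the filter acts diagonally: the i-th FPCA score
# is scaled by the "frequency response" h(lam_i).
h_lam = taps[0] + taps[1] * lam + taps[2] * lam**2
assert np.allclose(V.T @ y, h_lam * (V.T @ x))
```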