Multiscale Hodge Scattering Networks for Data Analysis

📅 2023-11-17
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Feature extraction for signals defined on simplicial complexes remains challenging due to the need for structural invariance and discriminability under simplex reordering. Method: This paper proposes the Multiscale Hodge Scattering Network (MHSN), the first scattering framework integrating multiscale Hodge analysis. It constructs two orthogonal dictionaries—κ-dimensional Generalized Haar–Walsh Transform (κ-GHWT) and κ-dimensional Hierarchical Graph Laplacian Eigen-Transform (κ-HGLET)—and designs a CNN-like hierarchical architecture grounded in Hodge theory. Features are generated via modulus nonlinearity and intrinsic local/scale-wise pooling, ensuring robustness to simplex relabeling. Contribution/Results: MHSN achieves superior invariance and discriminability compared to Morlet- or diffusion-wavelet-based scattering networks. On signal classification, graph/simplicial complex classification, and molecular dynamics prediction tasks, it attains state-of-the-art accuracy using only logistic regression or SVM classifiers, while employing significantly fewer parameters than mainstream graph neural networks.
📝 Abstract
We propose new scattering networks for signals measured on simplicial complexes, which we call \emph{Multiscale Hodge Scattering Networks} (MHSNs). Our construction is based on multiscale basis dictionaries on simplicial complexes, i.e., the $\kappa$-GHWT and $\kappa$-HGLET, which we recently developed for simplices of dimension $\kappa \in \mathbb{N}$ in a given simplicial complex by generalizing the node-based Generalized Haar-Walsh Transform (GHWT) and Hierarchical Graph Laplacian Eigen Transform (HGLET). The $\kappa$-GHWT and the $\kappa$-HGLET both form redundant sets (i.e., dictionaries) of multiscale basis vectors and the corresponding expansion coefficients of a given signal. Our MHSNs use a layered structure analogous to a convolutional neural network (CNN) to cascade the moments of the modulus of the dictionary coefficients. The resulting features are invariant to reordering of the simplices (i.e., node permutation of the underlying graphs). Importantly, the use of multiscale basis dictionaries in our MHSNs admits a natural pooling operation that is akin to local pooling in CNNs, and which may be performed either locally or per-scale. These pooling operations are harder to define in both traditional scattering networks based on Morlet wavelets and geometric scattering networks based on Diffusion Wavelets. As a result, we are able to extract a rich set of descriptive yet robust features that can be used along with very simple machine learning methods (i.e., logistic regression or support vector machines) to achieve high-accuracy classification systems with far fewer parameters to train than most modern graph neural networks. Finally, we demonstrate the usefulness of our MHSNs in three distinct types of problems: signal classification, domain (i.e., graph/simplex) classification, and molecular dynamics prediction.
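The core computation described in the abstract — cascading moments of the modulus of dictionary coefficients through CNN-like layers — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the random orthogonal matrices below stand in for the $\kappa$-GHWT/$\kappa$-HGLET dictionaries, and the function name `scattering_features` is hypothetical. Note that the permutation invariance the paper proves relies on the dictionaries themselves being built from the complex's structure; random bases here only demonstrate the shape of the cascade.

```python
import numpy as np

rng = np.random.default_rng(0)

def scattering_features(f, bases, n_layers=2, moments=(1, 2, 3)):
    """Sketch of a scattering cascade: at each layer, apply each basis,
    take the modulus of the coefficients, pool via q-th moments
    (scale-wise pooling), and feed the modulus outputs to the next layer."""
    feats, inputs = [], [f]
    for _ in range(n_layers):
        nxt = []
        for g in inputs:
            for B in bases:
                u = np.abs(B @ g)  # modulus nonlinearity on coefficients
                feats += [np.mean(u ** q) for q in moments]  # pooled moments
                nxt.append(u)  # propagate to the next layer
        inputs = nxt
    return np.array(feats)

# Toy example on a "complex" with 8 simplices and 3 stand-in scale bases.
n, n_scales = 8, 3
bases = [np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(n_scales)]
f = rng.standard_normal(n)
feats = scattering_features(f, bases)
```

The resulting fixed-length feature vector is what the paper then feeds to a simple classifier such as logistic regression or an SVM.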
Problem

Research questions and friction points this paper is trying to address.

Analyzing signals measured on simplicial complexes using scattering networks
Creating invariant features for classification tasks on complex data structures
Developing efficient feature extraction with fewer trainable parameters than neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiscale Hodge Scattering Networks for simplicial complex signals
Layered structure cascading modulus moments of dictionary coefficients
Uses multiscale basis dictionaries enabling natural pooling operations
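The "natural pooling" mentioned above comes from the hierarchical bipartition tree that underlies the GHWT/HGLET dictionaries: each tree node defines a region of simplices over which modulus coefficients can be averaged. A hedged sketch, with an illustrative two-region partition (the helper `local_pool` and the index lists are assumptions, not the paper's API):

```python
import numpy as np

def local_pool(u, regions, q=2):
    """Local pooling: q-th moment of |u| over each region of a
    hierarchical partition of the simplices (akin to CNN local pooling)."""
    return np.array([np.mean(np.abs(u[idx]) ** q) for idx in regions])

u = np.arange(8, dtype=float)  # toy coefficient vector on 8 simplices
# one level of a toy bipartition tree splitting the 8 simplices in two
level1 = [np.arange(0, 4), np.arange(4, 8)]
pooled = local_pool(u, level1)  # one pooled value per region
```

Because each pooled value depends only on the set of simplices in a region, not their ordering, the features inherit the invariance to simplex relabeling.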