Taxonomy of reduction matrices for Graph Coarsening

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Graph coarsening reduces graph size via a reduction matrix and a lifting matrix, but conventional methods constrain them to be pseudo-inverses of each other, which can leave the Restricted Spectral Approximation (RSA) error unnecessarily high. This work demonstrates that enforcing constraints solely on the lifting matrix suffices to preserve essential structural objects, such as the coarsened graph's Laplacian, while the reduction matrix can be designed freely to further minimize the RSA. Accordingly, the paper introduces a generalized notion of reduction matrix, decoupled from the pseudo-inverse of the lifting matrix, and establishes a systematic taxonomy of admissible families of reduction matrices, theoretically characterizing the relationships among this taxonomy, the RSA error, and structural preservation. Leveraging spectral graph theory and constrained optimization, the proposed reductions achieve significant RSA decreases across multiple benchmark graphs and improve GNN node-classification accuracy on coarsened graphs by up to 3.2%.
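
For reference, the RSA error mentioned above is commonly defined as follows. This is the standard definition from the graph-coarsening literature, with the notation (lifting Q, reduction R) assumed here; the paper's exact normalization may differ:

```latex
% Hedged sketch: standard RSA constant, notation assumed rather than
% copied from this paper.
% Q \in \mathbb{R}^{N \times n}: lifting matrix (N original nodes, n coarse nodes)
% R \in \mathbb{R}^{n \times N}: reduction matrix
% L: graph Laplacian; \mathcal{R} \subseteq \mathbb{R}^N: subspace to preserve,
%    e.g., the span of the first k eigenvectors of L.
\epsilon \;=\; \sup_{x \in \mathcal{R},\; \|x\|_L \neq 0}
  \frac{\| x - Q R x \|_L}{\| x \|_L},
\qquad \|x\|_L = \sqrt{x^\top L x}.
```

Since the RSA depends on the reduction matrix only through the product QR, fixing the lifting matrix Q still leaves freedom in R, which is exactly the degree of freedom the paper exploits.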

📝 Abstract
Graph coarsening aims to diminish the size of a graph to lighten its memory footprint, and has numerous applications in graph signal processing and machine learning. It is usually defined using a reduction matrix and a lifting matrix, which, respectively, project a graph signal from the original graph to the coarsened one and back. This results in a loss of information measured by the so-called Restricted Spectral Approximation (RSA). Most coarsening frameworks impose a fixed relationship between the reduction and lifting matrices, generally as pseudo-inverses of each other, and seek to define a coarsening that minimizes the RSA. In this paper, we remark that the roles of these two matrices are not entirely symmetric: indeed, putting constraints on the lifting matrix alone ensures the existence of important objects such as the coarsened graph's adjacency matrix or Laplacian. In light of this, we introduce a more general notion of reduction matrix, which is not necessarily the pseudo-inverse of the lifting matrix. We establish a taxonomy of "admissible" families of reduction matrices, discuss the different properties that they must satisfy, and examine whether or not they admit a closed-form description. We show that, for a fixed coarsening represented by a fixed lifting matrix, the RSA can be further reduced simply by modifying the reduction matrix. We explore different examples, including some based on a constrained optimization process of the RSA. Since this criterion has also been linked to the performance of Graph Neural Networks, we also illustrate the impact of these choices on different node classification tasks on coarsened graphs.
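
To make the abstract's central claim concrete (for a fixed lifting matrix Q, the RSA can be reduced by changing only the reduction matrix), below is a minimal NumPy sketch on a toy path graph. The graph, the pairwise partition, and the L-orthogonal-projection reduction are illustrative assumptions, not taken from the paper; the RSA computation follows the standard definition given above.

```python
import numpy as np
from scipy.linalg import eigh, pinv

# Toy setup (assumption, not from the paper): a path graph on N nodes,
# coarsened to n supernodes by grouping consecutive pairs.
N, n, k = 8, 4, 3                        # original size, coarse size, subspace dim
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A                # combinatorial Laplacian

# Lifting matrix Q (N x n): binary cluster membership, node i -> cluster i // 2.
Q = np.zeros((N, n))
for i in range(N):
    Q[i, i // 2] = 1.0
Lc = Q.T @ L @ Q                         # coarse Laplacian, determined by Q alone

# Subspace to preserve: the first k non-constant Laplacian eigenvectors.
lam, U = eigh(L)
Uk, lam_k = U[:, 1:k + 1], lam[1:k + 1]

def rsa(R):
    """Largest ratio ||x - Q R x||_L / ||x||_L over x in span(Uk)."""
    E = (np.eye(N) - Q @ R) @ Uk
    M = E.T @ L @ E
    return np.sqrt(eigh(M, np.diag(lam_k), eigvals_only=True)[-1])

R_pinv = pinv(Q)                         # conventional choice: pseudo-inverse of Q
R_proj = pinv(Q.T @ L @ Q) @ Q.T @ L     # L-orthogonal projection onto range(Q)
print(f"RSA with pseudo-inverse reduction: {rsa(R_pinv):.4f}")
print(f"RSA with L-projection reduction:   {rsa(R_proj):.4f}")
```

The second reduction cannot do worse here: Q @ R_proj is the L-orthogonal projector onto range(Q), which minimizes the residual ||x - QRx||_L for every x, so its RSA lower-bounds that of any other reduction paired with the same Q. Whether it falls into one of the paper's admissible families is not claimed here.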
Problem

Research questions and friction points this paper is trying to address.

Generalizing reduction matrices for graph coarsening beyond pseudo-inverse constraints
Exploring admissible reduction matrices to minimize Restricted Spectral Approximation (RSA)
Evaluating impact of reduction matrix choices on Graph Neural Network performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized reduction matrix concept introduced
Taxonomy of admissible reduction matrices established
RSA minimized via optimized reduction matrices (illustrated by the optimization sketch below)
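
The abstract mentions reduction matrices obtained through "a constrained optimization process of the RSA". As a rough illustration of what such an optimization can look like (not the paper's actual procedure), here is a projected-gradient sketch minimizing a smooth surrogate of the RSA, namely the sum of squared L-norm residuals over the subspace instead of the sup, under the hypothetical admissibility constraint R Q = I. It reuses L, Q, Uk, and N from the sketch above; the constraint, surrogate, and step size are all assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import pinv

Qp = pinv(Q)
P_ker = np.eye(N) - Q @ Qp          # projector onto the orthogonal complement of range(Q)

def surrogate(R):
    """Smooth stand-in for the RSA: sum of squared L-norms of the residuals."""
    E = (np.eye(N) - Q @ R) @ Uk
    return np.trace(E.T @ L @ E)

R = Qp.copy()                       # feasible start: Qp @ Q = I
lr = 0.05                           # step size (assumption; small enough for this toy problem)
for _ in range(200):
    E = (np.eye(N) - Q @ R) @ Uk
    grad = -2 * Q.T @ L @ E @ Uk.T  # gradient of the surrogate w.r.t. R
    R -= lr * grad @ P_ker          # step only along directions D with D @ Q = 0
print(f"surrogate, pseudo-inverse: {surrogate(Qp):.4f}")
print(f"surrogate, optimized R:    {surrogate(R):.4f}")
```

This quadratic surrogate actually admits a closed-form minimizer; explicit gradient steps are used only to make the "constrained optimization process" visible. Multiplying the gradient by P_ker keeps every iterate satisfying R Q = I, since P_ker @ Q = 0.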
Authors
Antonin Joly (CNRS, IRISA, Rennes, France)
Nicolas Keriven (CNRS, IRISA; Graph Machine Learning, Random Graphs, Compressive Sensing)
Aline Roumy (INRIA, Rennes, France)