🤖 AI Summary
Graph Neural Networks (GNNs) exhibit fundamental limitations in learning sparse matrix preconditioners: their message-passing (MP) mechanism cannot capture the non-local dependencies required to approximate high-quality preconditioners (e.g., triangular factorizations) whose entries depend on global matrix structure.
Method: The authors construct the first benchmark suite of synthetic and real-world sparse matrices explicitly designed to stress-test non-local preconditioning requirements; they show, both theoretically and empirically, that MP-GNNs cannot universally approximate triangular factorizations; and they characterize the intrinsic trade-off between MP-GNN expressive power and structural distance in matrices.
Results: Experiments reveal significant approximation bottlenecks for mainstream GNNs on preconditioner learning, establishing both theoretical limits on and empirical evidence about the applicability of GNNs in numerical linear algebra.
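The non-locality of triangular factorizations can be seen directly in a small experiment (an illustrative sketch, not taken from the paper): for a tridiagonal SPD matrix, perturbing the top-left entry `A[0,0]` changes the last diagonal entry of its Cholesky factor, even though those entries are n-1 hops apart in the matrix's graph.

```python
import numpy as np

def chol_last_entry(a00, n=50):
    # Tridiagonal SPD matrix: 2 on the diagonal (a00 at [0,0]), -1 off-diagonal.
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = a00
    L = np.linalg.cholesky(A)  # lower-triangular factor, A = L @ L.T
    return L[-1, -1]

base = chol_last_entry(2.0)
perturbed = chol_last_entry(3.0)
# Nonzero: L[n-1, n-1] depends on A[0, 0], which is n-1 hops away in the
# sparsity graph, so a k-layer MP-GNN with k < n-1 cannot reproduce it exactly.
print(abs(perturbed - base))
```

The dependence decays with distance here but never vanishes, which is exactly the kind of long-range coupling a fixed-depth message-passing network cannot represent.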
📝 Abstract
We study fundamental limitations of Graph Neural Networks (GNNs) for learning sparse matrix preconditioners. While recent works have shown promising results using GNNs to predict incomplete factorizations, we demonstrate that the local nature of message passing creates inherent barriers for capturing non-local dependencies required for optimal preconditioning. We introduce a new benchmark dataset of matrices where good sparse preconditioners exist but require non-local computations, constructed using both synthetic examples and real-world matrices. Our experimental results show that current GNN architectures struggle to approximate these preconditioners, suggesting the need for new architectural approaches beyond traditional message passing networks. We provide theoretical analysis and empirical evidence to explain these limitations, with implications for the broader use of GNNs in numerical linear algebra.
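The "local nature of message passing" mentioned above refers to the fact that a k-layer MP-GNN's output at a node depends only on its k-hop neighborhood. A minimal sketch (mean-neighbor aggregation on a path graph; not the paper's architecture) makes this concrete: a feature perturbation k+1 hops away from node 0 leaves node 0's output exactly unchanged.

```python
import numpy as np

def mp_layers(adj, x, k):
    # k rounds of mean-neighbor aggregation: a minimal message-passing model.
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(k):
        x = adj @ x / deg
    return x

n, k = 10, 3
# Path-graph adjacency with self-loops, so each node keeps its own feature.
adj = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
x1 = np.zeros((n, 1))
x2 = x1.copy()
x2[k + 1] = 100.0  # perturb a feature k+1 hops away from node 0
out1 = mp_layers(adj, x1, k)
out2 = mp_layers(adj, x2, k)
# out1[0] == out2[0]: after k layers, node 0 has not "seen" the perturbation.
print(out1[0], out2[0])
```

Since good preconditioner entries can depend on matrix structure arbitrarily far away (as in triangular factorizations), any fixed-depth architecture of this form faces an inherent approximation barrier.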