🤖 AI Summary
Sparse-dense and sparse-sparse matrix multiplication are fundamental operations in graph analytics, graph neural networks, and biological sequence alignment, often requiring execution over arbitrary algebraic semirings (including heterogeneous algebras). Existing systems lack unified support for such generalized algebraic semantics. This paper introduces the first unified computational framework for sparse matrix multiplication under generic algebraic semantics. Grounded in semiring theory, it proposes an extensible sparse tensor algebra model supporting user-defined scalar operations, heterogeneous input domains, and operator fusion; it further leverages high-performance sparse storage formats and compiler optimizations. Experimental evaluation across machine learning, computational biology, and scientific computing demonstrates substantial improvements in both expressive power and computational efficiency. The framework establishes a scalable, formally verifiable infrastructure for generalized linear algebra.
📝 Abstract
Multiplication of a sparse matrix with another (dense or sparse) matrix is a fundamental operation that captures the computational patterns of many data science applications, including but not limited to graph algorithms, sparsely connected neural networks, graph neural networks, clustering, and many-to-many comparisons of biological sequencing data.
In many application scenarios, the matrix multiplication takes place on an arbitrary algebraic semiring where the scalar operations are overloaded with user-defined functions with certain properties, or on a more general heterogeneous algebra where even the domains of the input matrices can differ. Here, we provide a unifying treatment of the sparse matrix-matrix operation and its rich application space, including machine learning, computational biology and chemistry, graph algorithms, and scientific computing.
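To make the semiring-overloading idea concrete, the following is a minimal illustrative sketch (not the paper's implementation): a sparse matrix product where the usual `+` and `*` are replaced by user-supplied `add` and `mul` operations. The function name and the dict-of-coordinates storage are assumptions chosen for brevity; real systems would use formats such as CSR. Instantiating the sketch with the tropical (min, +) semiring turns the product into a shortest-path relaxation step.

```python
# Sketch: sparse matrix multiply over a user-defined semiring (add, mul, zero).
# Matrices are stored as dicts mapping (row, col) -> value; only nonzeros appear.

def semiring_spgemm(A, B, add, mul, zero):
    """Compute C = A x B where scalar + and * are replaced by add and mul."""
    # Index B's nonzeros by row so we can scan B[k, :] for each A[i, k].
    B_rows = {}
    for (k, j), v in B.items():
        B_rows.setdefault(k, []).append((j, v))

    C = {}
    for (i, k), a in A.items():
        for j, b in B_rows.get(k, []):
            # Accumulate with the semiring's "addition".
            C[(i, j)] = add(C.get((i, j), zero), mul(a, b))
    return C

# Tropical (min, +) semiring: C[i, j] = min_k (A[i, k] + B[k, j]),
# i.e. one relaxation step of all-pairs shortest paths.
INF = float("inf")
A = {(0, 0): 1.0, (0, 1): 4.0, (1, 1): 2.0}
dist = semiring_spgemm(A, A, add=min, mul=lambda x, y: x + y, zero=INF)
# dist == {(0, 0): 2.0, (0, 1): 5.0, (1, 1): 4.0}
```

Swapping in other (add, mul) pairs, e.g. (or, and) for reachability or (+, *) for ordinary arithmetic, reuses the same kernel, which is the flexibility the abstract refers to.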