Automated Tensor-Relational Decomposition for Large-Scale Sparse Tensor Computation

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of large-scale sparse tensor computations by proposing a tensor-relational hybrid computation model. The model introduces a novel case-sensitive Einstein summation (EinSum) notation that automatically decomposes sparse tensor expressions into a combination of relational algebra operations and dense tensor computations: sparsity is efficiently managed by a relational database system, while dense numerical kernels handle the compute-intensive parts. This approach enables co-optimization of sparse structure handling and high-performance computing, significantly improving overall performance without sacrificing sparse processing efficiency. It represents the first end-to-end automatic compiler capable of translating high-level EinSum expressions into efficient hybrid execution plans.

📝 Abstract
A \emph{tensor-relational} computation is a relational computation where individual tuples carry vectors, matrices, or higher-dimensional arrays. An advantage of tensor-relational computation is that the overall computation can be executed on top of a relational system, inheriting the system's ability to automatically handle very large inputs with high levels of sparsity while high-performance kernels (such as optimized matrix-matrix multiplication codes) can be used to perform most of the underlying mathematical operations. In this paper, we introduce upper-case-lower-case \texttt{EinSum}, which is a tensor-relational version of the classical Einstein Summation Notation. We study how to automatically rewrite a computation in Einstein Notation into upper-case-lower-case \texttt{EinSum} so that computationally intensive components are executed using efficient numerical kernels, while sparsity is managed relationally.
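To make the tensor-relational split concrete, here is a minimal, hedged sketch of the idea behind upper-case-lower-case \texttt{EinSum}: a blocked sparse matrix is stored as a relation of tuples whose upper-case indices (block coordinates) are handled relationally, while each tuple carries a small dense block handled by a numerical kernel. The function name, the dict-as-relation encoding, and the blocking scheme are illustrative assumptions, not the paper's actual API or compiler output.

```python
import numpy as np

def block_sparse_matmul(A, B):
    """C[I, J] = sum over K of A[I, K] @ B[K, J].

    Illustrative sketch: the contraction over the shared upper-case index K
    is a relational join, while the lower-case indices are contracted by a
    dense kernel (np.einsum) on the blocks each tuple carries.
    """
    C = {}
    for (I, K), a in A.items():            # relational part: join on K
        for (K2, J), b in B.items():
            if K == K2:
                # dense part: lower-case indices handled by a numeric kernel
                blk = np.einsum("ik,kj->ij", a, b)
                if (I, J) in C:
                    C[(I, J)] += blk
                else:
                    C[(I, J)] = blk
    return C

# Tiny usage example: two blocked matrices with one nonzero 2x2 block each.
A = {(0, 0): np.eye(2)}
B = {(0, 1): np.ones((2, 2))}
C = block_sparse_matmul(A, B)
# C has a single nonzero block at coordinate (0, 1).
```

In a real system the nested-loop join above would be replaced by the relational engine's own join and aggregation operators, which is what lets the approach inherit the database's handling of very large, sparse inputs.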
Problem

Research questions and friction points this paper is trying to address.

tensor-relational computation
Einstein Summation Notation
sparse tensor
automated decomposition
large-scale computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

tensor-relational decomposition
Einstein Summation
sparse tensor computation
automatic rewriting
relational systems
Yuxin Tang
Rice University
Zhiyuan Xin
Rice University
Zhimin Ding
Rice University
Xinyu Yao
Rice University
Daniel Bourgeois
Rice University
Tirthak Patel
Rice University
Quantum Computing, High Performance Computing, Supercomputing
Chris Jermaine
Associate Professor of Computer Science, Rice University
Databases, Data Management, Data Analytics