Accelerating Sparse MTTKRP for Small Tensor Decomposition on GPU

📅 2025-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
The spMTTKRP (sparse matricized tensor times Khatri-Rao product) operation—a critical bottleneck in sparse tensor decomposition—exhibits low efficiency on GPUs due to irregular memory access and poor load balancing. Method: We propose a mode-customized multi-replica tensor layout and an adaptive tensor partitioning strategy. Our approach introduces the first mode-specific tensor layout, eliminating intermediate-value communication between thread blocks and global memory. It further incorporates a dynamic load-balancing partitioning mechanism guided by sparsity patterns and dimensional characteristics, coupled with SM-level fine-grained scheduling and hardware-accelerated Khatri-Rao product computation. Results: Evaluated on mainstream sparse tensor datasets, our method achieves geometric mean speedups of 2.4× end-to-end, 7.9× for spMTTKRP alone, and 8.9× for small-scale decompositions over the state-of-the-art GPU baselines, significantly advancing end-to-end sparse tensor decomposition performance.

📝 Abstract
Sparse Matricized Tensor Times Khatri-Rao Product (spMTTKRP) is the bottleneck kernel of sparse tensor decomposition. In tensor decomposition, spMTTKRP is performed iteratively along all the modes of an input tensor. In this work, we propose a mode-specific tensor layout on GPU that uses multiple tensor copies, where each copy is optimized for a specific mode. The proposed tensor layout increases the data locality of external memory accesses and eliminates the communication of intermediate values between the GPU thread blocks and GPU global memory. We also propose a tensor partitioning scheme that optimally distributes the total computation among GPU streaming multiprocessors based on the sparsity and the dimensions of the input tensor. Our approach achieves geometric mean speedups of 2.4x, 7.9x, and 8.9x in total execution time compared with the state-of-the-art GPU baselines.
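The mode-wise structure of spMTTKRP described in the abstract can be made concrete with a small reference computation. The sketch below is illustrative only (the function name and the COO nonzero list are assumptions, not the paper's GPU implementation): it computes MTTKRP along mode 0 of a 3rd-order sparse tensor, where each nonzero x[i,j,k] contributes v · (B[j,:] ∘ C[k,:]) to row i of the output factor matrix.

```python
import numpy as np

def spmttkrp_mode0(coords, vals, B, C, I):
    """Reference sparse MTTKRP along mode 0 of a 3rd-order COO tensor.

    coords : list of (i, j, k) nonzero coordinates
    vals   : matching nonzero values
    B, C   : factor matrices for modes 1 and 2, shapes (J, R) and (K, R)

    Returns M of shape (I, R) with M[i] = sum over nonzeros (i, j, k) of
    x[i, j, k] * B[j, :] * C[k, :].
    """
    R = B.shape[1]
    M = np.zeros((I, R))
    for (i, j, k), v in zip(coords, vals):
        # Each nonzero scales the elementwise (Hadamard) product of the
        # matching rows of B and C, accumulated into row i of the output.
        M[i] += v * B[j] * C[k]
    return M
```

In a full CP decomposition this kernel is re-run along every mode with the roles of the factor matrices rotated, which is why the paper keeps one tensor replica laid out per mode.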
Problem

Research questions and friction points this paper is trying to address.

Optimizing sparse MTTKRP for GPU acceleration
Improving data locality in tensor decomposition
Enhancing GPU computation distribution for sparse tensors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mode-specific tensor layout on GPU
Multiple optimized tensor copies
Tensor partitioning scheme for sparsity
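The partitioning idea in the bullets above can be illustrated with a deliberately simplified sketch: splitting the tensor's nonzeros into contiguous, near-equal ranges, one per streaming multiprocessor. The paper's actual scheme adapts to sparsity patterns and tensor dimensions; the function below is only a hypothetical baseline showing the load-balancing goal.

```python
def partition_nonzeros(nnz, num_sms):
    """Split nnz nonzeros into num_sms contiguous [start, end) ranges whose
    sizes differ by at most one (a naive equal-work partition; the paper's
    adaptive scheme additionally accounts for sparsity and mode dimensions).
    """
    base, rem = divmod(nnz, num_sms)
    ranges, start = [], 0
    for s in range(num_sms):
        end = start + base + (1 if s < rem else 0)
        ranges.append((start, end))
        start = end
    return ranges
```

Even this naive split keeps per-SM work within one nonzero of equal; the harder problem the paper targets is that nonzeros clustered on a few fibers make equal *counts* unequal *work*, which is what the sparsity-aware partitioning corrects.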