🤖 AI Summary
This work targets efficient decomposition of high-dimensional sparse tensors, which are common in healthcare and cybersecurity, on modern parallel processors, overcoming the restrictive assumptions about mode orientation or sparsity distribution inherent in conventional compressed formats. We propose ALTO (Adaptive Linearized Tensor Order), a sparse tensor representation that is agnostic to both mode orientation and the irregular distribution of nonzero elements. Built on ALTO, we design parallel decomposition algorithms with low synchronization overhead and high data reuse, augmented by dynamic adaptation heuristics that automatically select the best algorithm for a given sparse tensor. With cache- and memory-aware optimizations on Intel Xeon Scalable processors, experiments show that ALTO achieves more than an order-of-magnitude speedup over the best mode-agnostic formats and a 5.1× geometric-mean speedup over the best mode-specific formats, at only 25% of their storage cost.
📝 Abstract
High-dimensional sparse data emerge in many critical application domains such as healthcare and cybersecurity. To extract meaningful insights from massive volumes of these multi-dimensional data, scientists employ unsupervised analysis tools based on tensor decomposition (TD) methods. However, real-world sparse tensors exhibit highly irregular shapes and data distributions, which pose significant challenges for making efficient use of modern parallel processors. This study breaks the prevailing assumption that compressing sparse tensors into coarse-grained structures or along a particular dimension/mode is more efficient than keeping them in a fine-grained, mode-agnostic form. Our novel sparse tensor representation, Adaptive Linearized Tensor Order (ALTO), encodes tensors in a compact format that can be easily streamed from memory and is amenable to both caching and parallel execution. In contrast to existing compressed tensor formats, ALTO constructs one tensor copy that is agnostic to both the mode orientation and the irregular distribution of nonzero elements. To demonstrate the efficacy of ALTO, we propose a set of parallel TD algorithms that exploit the inherent data reuse of tensor computations to substantially reduce synchronization overhead, decrease memory footprint, and improve parallel performance. Additionally, we characterize the major execution bottlenecks of TD methods on the latest Intel Xeon Scalable processors and introduce dynamic adaptation heuristics to automatically select the best algorithm based on the sparse tensor characteristics. Across a diverse set of real-world data sets, ALTO outperforms the state-of-the-art approaches, achieving more than an order-of-magnitude speedup over the best mode-agnostic formats. Compared to the best mode-specific formats, ALTO achieves 5.1X geometric mean speedup at a fraction (25%) of their storage costs.
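The abstract's central idea is a linearized, mode-agnostic encoding: each nonzero's multi-dimensional coordinate is packed into a single compact index, so the tensor can be stored as one sorted stream rather than one compressed copy per mode. The abstract does not specify the exact bit layout, so the sketch below uses simple bit interleaving of the mode indices (Morton-order style) as one plausible illustration of such a mode-agnostic linearization; the function names `linearize` and `delinearize` are hypothetical, not from the paper.

```python
def linearize(coords, bits_per_mode):
    """Pack an N-dimensional nonzero coordinate into one integer by
    interleaving the bits of each mode index (a Morton-style layout;
    an illustrative stand-in for ALTO's adaptive encoding)."""
    linear = 0
    out_bit = 0
    for b in range(max(bits_per_mode)):
        for mode, idx in enumerate(coords):
            if b < bits_per_mode[mode]:
                linear |= ((idx >> b) & 1) << out_bit
                out_bit += 1
    return linear

def delinearize(linear, bits_per_mode):
    """Recover the per-mode indices from the interleaved linear index,
    e.g. to locate the factor-matrix rows touched by a nonzero."""
    coords = [0] * len(bits_per_mode)
    out_bit = 0
    for b in range(max(bits_per_mode)):
        for mode in range(len(bits_per_mode)):
            if b < bits_per_mode[mode]:
                coords[mode] |= ((linear >> out_bit) & 1) << b
                out_bit += 1
    return tuple(coords)

# A 3-mode tensor of shape 8 x 4 x 2 needs (3, 2, 1) bits per mode, so
# every nonzero coordinate fits in a single 6-bit linear index; no mode
# is privileged, and sorting by this index yields one streamable copy.
bits = (3, 2, 1)
coord = (5, 3, 1)
key = linearize(coord, bits)
assert delinearize(key, bits) == coord
```

Because nearby linear indices correspond to nearby coordinates in every mode, iterating the sorted nonzeros gives cache-friendly access to all factor matrices at once, which is the kind of data reuse the abstract attributes to ALTO.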