A GPU-resident Memory-Aware Algorithm for Accelerating Bidiagonalization of Banded Matrices

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Bidiagonalization of banded matrices—a critical preprocessing step for the SVD—has long suffered from memory bandwidth bottlenecks, hindering efficient GPU acceleration. This work introduces the first memory-aware GPU bidiagonalization algorithm, overcoming both memory capacity and bandwidth limitations. Methodologically, the authors develop a hardware-aware performance model and jointly optimize L1 cache utilization, block-level concurrency control, and kernel tiling strategies. Leveraging Julia's Array abstraction and the KernelAbstractions framework, the implementation delivers unified support across NVIDIA, AMD, Intel, and Apple Metal GPUs in half, single, and double precision. Experiments demonstrate over 100× speedup over the CPU-based PLASMA and SLATE libraries on 32k×32k matrices; performance advantages emerge even at modest scales (1024×1024), and performance scales linearly with matrix bandwidth—achieving order-of-magnitude efficiency gains.

📝 Abstract
The reduction of a banded matrix to bidiagonal form is a crucial step in the Singular Value Decomposition (SVD), a cornerstone of scientific computing and AI. Although the algorithm is highly parallel, it was previously believed to be unsuitable for GPU computation because it is memory bandwidth-bound. Recent developments in GPU hardware, including larger L1 memory per Streaming Multiprocessor/Compute Unit, have changed that. We present the first GPU algorithm for reducing a banded matrix to bidiagonal form as part of the NextLA.jl open-source software package. Our algorithm is based on previous CPU-based multicore parallel cache-efficient bulge-chasing algorithms, adapted to optimize for GPU throughput. We leverage the Julia language's Array abstractions and KernelAbstractions to implement a single hardware- and data-precision-agnostic function on NVIDIA, AMD, Intel, and Apple Metal GPUs for half, single, and double precision, and examine performance optimization across hardware architectures and data precisions. We also develop a hardware-aware performance model and identify key hyperparameters, such as inner tile width and block concurrency, that govern optimal GPU execution for bandwidth-bound workloads. We demonstrate that a highly parallel, bandwidth-bound algorithm on the GPU can outperform CPU-based implementations: the GPU algorithm outperforms the multithreaded high-performance CPU libraries PLASMA and SLATE from matrix size 1024 × 1024 onward, and by a factor of over 100 for matrices of 32k × 32k. In addition, the performance of the algorithm increases linearly with matrix bandwidth, making the fast reduction of matrices with larger bandwidths possible as well. With this work, we break memory bandwidth barriers, as well as matrix bandwidth barriers, resulting in orders-of-magnitude faster algorithms for the reduction of banded matrices to bidiagonal form on the GPU.
Problem

Research questions and friction points this paper is trying to address.

Accelerating bidiagonalization of banded matrices on GPUs
Overcoming memory bandwidth limitations in GPU computation
Developing cross-platform GPU algorithm for SVD preprocessing
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU-resident memory-aware algorithm for bidiagonalization
Optimized bulge chasing adapted for GPU throughput
Hardware-agnostic implementation across multiple GPU platforms
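The hardware-agnostic implementation described above relies on KernelAbstractions.jl, which lets one kernel definition run on CPUs and on NVIDIA, AMD, Intel, and Apple Metal GPUs. A minimal sketch of that portability pattern (the kernel below is illustrative, not one of the paper's actual bulge-chasing kernels):

```julia
# Sketch of the KernelAbstractions.jl portability pattern the paper uses:
# one kernel definition, dispatched to whatever backend owns the array.
# `scale_kernel!` is a hypothetical example kernel, not from NextLA.jl.
using KernelAbstractions

@kernel function scale_kernel!(A, α)
    i = @index(Global, Linear)   # global linear index of this work item
    @inbounds A[i] *= α
end

A = ones(Float32, 1024)
backend = get_backend(A)         # CPU() here; CUDABackend() for a CuArray, etc.
scale_kernel!(backend)(A, 2.0f0; ndrange = length(A))
KernelAbstractions.synchronize(backend)
```

Swapping the plain `Array` for a `CuArray`, `ROCArray`, `oneArray`, or `MtlArray` reuses the same kernel unchanged; the backend and launch configuration are inferred from the array type, which is what enables a single precision- and vendor-agnostic function.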