Hierarchical Precision and Recursion for Accelerating Symmetric Linear Solves on MXUs

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes a hierarchical recursive mixed-precision algorithm for the core computations in solving symmetric positive definite linear systems: Cholesky decomposition (POTRF), triangular solves (TRSM), and symmetric rank-k updates (SYRK). By leveraging a nested recursive structure, the algorithm reformulates Cholesky decomposition into GEMM-dominated subproblems that maximize computational throughput, preserving FP64 on diagonal blocks for numerical stability while employing FP16 on large off-diagonal blocks for efficiency. Implemented in Julia with a hardware-agnostic interface, the approach targets both NVIDIA Tensor Cores and AMD Matrix Cores. On an H200 GPU, the recursive FP64 SYRK achieves a 14× speedup over cuBLAS, mixed-precision SYRK and TRSM attain 27× and 5× acceleration over their full-precision baselines, and the overall Cholesky solve outperforms cuSOLVER's FP64 implementation by 5×, delivering two orders of magnitude better accuracy than pure FP16 while retaining 88% of the FP16 peak speedup; comparable results are observed on the MI300X.
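The recursive structure described above (POTRF split into a smaller POTRF, a TRSM panel solve, and a SYRK trailing update, applied recursively) can be sketched in Julia, the paper's implementation language. This is a minimal CPU-only illustration, not the paper's code: the function name `recursive_cholesky!` and the base-case size `nb` are assumptions, and the MXU kernels and mixed-precision tiling are omitted.

```julia
using LinearAlgebra

# Illustrative sketch of the recursive formulation: factor the leading block,
# solve a triangular system for the panel, apply a rank-k update to the
# trailing block, and recurse. Most flops land in the mul! update, which is
# why the recursion becomes GEMM-dominated.
function recursive_cholesky!(A::AbstractMatrix{T}; nb::Int = 256) where {T<:Real}
    n = size(A, 1)
    if n <= nb
        cholesky!(Hermitian(A, :L))            # base case: LAPACK POTRF on a small block
        return A
    end
    k = n ÷ 2
    A11 = view(A, 1:k, 1:k)
    A21 = view(A, k+1:n, 1:k)
    A22 = view(A, k+1:n, k+1:n)

    recursive_cholesky!(A11; nb)               # POTRF on the leading block
    rdiv!(A21, LowerTriangular(A11)')          # TRSM: A21 ← A21 · L11⁻ᵀ
    mul!(A22, A21, A21', -one(T), one(T))      # SYRK-style update: A22 ← A22 − A21·A21ᵀ
    recursive_cholesky!(A22; nb)               # recurse on the trailing block
    return A                                   # the lower triangle of A now holds L
end
```

As a quick sanity check, for `S = B*B' + n*I` with random `B`, the lower triangle `L` left in `recursive_cholesky!(copy(S))` satisfies `L*L' ≈ S` to machine precision.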

📝 Abstract
Symmetric linear solves are fundamental to a wide range of scientific and engineering applications, from climate modeling and structural analysis to machine learning and optimization. These workloads often rely on Cholesky (POTRF) decomposition and its supporting operations, triangular solves (TRSM) and symmetric rank-k updates (SYRK), which together form the computational core for solving symmetric positive-definite systems. To accelerate these kernels, we present a portable, mixed-precision solver designed for Matrix Processing Units (MXUs), including NVIDIA Tensor Cores (H200) and AMD Matrix Cores (MI300X). Our algorithm builds on a nested recursive formulation in which Cholesky exposes parallelism through recursive decomposition of its TRSM and SYRK sub-problems. This structure yields a hierarchical recursion that maximizes GEMM throughput while enabling fine-grained control over numerical precision. We introduce a custom recursive data structure that assigns low-precision FP16 arithmetic to large off-diagonal blocks, while preserving high precision on diagonal blocks to ensure numerical stability. The solver is implemented in Julia, leveraging array programming, multiple dispatch, and dynamic type inference to enable seamless expression of mixed-precision computation. This design provides a high-level, hardware-agnostic interface while efficiently interfacing with low-level vendor libraries for backend portability. On H200, our recursive FP64 SYRK achieves a 14x speedup over cuBLAS, while mixed-precision delivers up to 27x speedup in SYRK and 5x in TRSM over full-precision baselines. This results in a 5x overall speedup for Cholesky versus cuSOLVER FP64, with 100x better accuracy than pure FP16 while retaining 88% of its peak speedup. Comparable performance and accuracy trends are observed on MI300X, demonstrating broad applicability across GPUs.
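The custom recursive data structure mentioned in the abstract is not reproduced on this page. As a hypothetical, flattened illustration of the precision assignment it describes (FP64 on diagonal blocks, FP16 on off-diagonal blocks), the sketch below tiles a dense matrix and stores each tile at the corresponding precision; the type name `MixedBlockMatrix`, its fields, and the `tile` helper are invented for this example.

```julia
# Hypothetical sketch, not the paper's data structure: keep diagonal tiles
# in FP64 for numerical stability and down-cast off-diagonal tiles to FP16,
# so the large SYRK/GEMM updates can run at low precision on the MXUs.
struct MixedBlockMatrix
    hi::Matrix{Matrix{Float64}}   # diagonal tiles (other slots left unassigned)
    lo::Matrix{Matrix{Float16}}   # off-diagonal tiles
    bs::Int                       # tile size
end

function MixedBlockMatrix(A::AbstractMatrix{Float64}, bs::Int)
    n  = size(A, 1)
    nt = cld(n, bs)
    hi = Matrix{Matrix{Float64}}(undef, nt, nt)
    lo = Matrix{Matrix{Float16}}(undef, nt, nt)
    for j in 1:nt, i in 1:nt
        rows = (i - 1) * bs + 1 : min(i * bs, n)
        cols = (j - 1) * bs + 1 : min(j * bs, size(A, 2))
        if i == j
            hi[i, j] = A[rows, cols]             # full precision on the diagonal
        else
            lo[i, j] = Float16.(A[rows, cols])   # low precision off the diagonal
        end
    end
    return MixedBlockMatrix(hi, lo, bs)
end

# Tiles come back at their stored precision; multiple dispatch on the tile's
# element type can then route to FP64 or FP16 kernels downstream.
tile(M::MixedBlockMatrix, i::Int, j::Int) = i == j ? M.hi[i, j] : M.lo[i, j]
```

In this toy layout, `eltype(tile(M, 1, 1)) == Float64` while `eltype(tile(M, 2, 1)) == Float16`, which is the property the abstract's Julia design exploits through multiple dispatch.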
Problem

Research questions and friction points this paper is trying to address.

symmetric linear solves
Cholesky decomposition
mixed-precision
matrix processing units
numerical stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

hierarchical recursion
mixed-precision computing
Cholesky decomposition
matrix processing units
numerical stability
Vicki Carrica
CS and AI Laboratories, Massachusetts Institute of Technology, Cambridge, MA
Rabab Alomairy
CS and AI Laboratories, Massachusetts Institute of Technology, Cambridge, MA
Evelyne Ringoot
PhD candidate, Massachusetts Institute of Technology
High-Performance Computing, Linear Algebra
Alan Edelman
Professor of Applied Mathematics, Member of CS and AI Laboratories, MIT
Corgis, Random Matrix Theory, Julia, Numerical Linear Algebra, Parallel Computing