🤖 AI Summary
This paper addresses large-scale dense matrices amenable to hierarchical representations—specifically, strongly admissible H² matrices—and proposes a linear-complexity direct solver tailored for fine-grained parallel architectures. Methodologically, it employs a strong recursive skeletonization factorization framework, integrating black-box matrix input, prefix-sum-based memory management, and multi-level matrix graph coloring for parallelism—requiring no geometric or analytic prior knowledge of the system's origin. Its key contribution is a deep integration of recursive skeletonization with the H² format, achieving linear O(N) time and memory complexity for both the factorization and solution phases. Experimental evaluation on matrices up to one million in size demonstrates linear complexity scaling and parallel scaling up to 16 threads, with dynamic memory overhead avoided through prefix-sum memory management; an experimental backward error analysis supports the method's numerical stability.
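The prefix-sum-based memory management mentioned above can be illustrated with a minimal sketch (assumed for illustration, not taken from the paper): when every block's storage requirement is known in advance, an exclusive prefix sum over the block sizes yields each block's offset into a single preallocated buffer, so no dynamic allocation is needed during factorization.

```python
from itertools import accumulate

def block_offsets(block_sizes):
    """Exclusive prefix sum: offset of each block in one flat buffer.

    Hypothetical helper illustrating prefix-sum memory management;
    names and sizes here are illustrative, not from the paper.
    """
    return [0] + list(accumulate(block_sizes))[:-1]

# Example: four blocks of known size share one allocation of length 14.
sizes = [3, 5, 2, 4]
offsets = block_offsets(sizes)   # [0, 3, 8, 10]
total = sum(sizes)               # a single allocation covers all blocks
```

In a batched parallel setting this matters because each thread can write into its block's slice `buffer[offsets[i] : offsets[i] + sizes[i]]` without any allocator synchronization.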
📝 Abstract
We present the factorization and solution phases of a new linear-complexity direct solver designed for concurrent batch operations on fine-grained parallel architectures, for matrices amenable to hierarchical representation. We focus on the strong-admissibility-based $\mathcal{H}^2$ format, where strong recursive skeletonization factorization compresses remote interactions. We build upon previous implementations of $\mathcal{H}^2$ matrix construction for efficient factorization and solution algorithm design, which are illustrated graphically in stepwise detail. The algorithms are "black-box" in the sense that the only inputs are the matrix and the right-hand side, without analytical or geometrical information about the origin of the system. We demonstrate linear complexity scaling in both time and memory on four representative families of dense matrices up to one million in size. Parallel scaling up to 16 threads is enabled by multi-level matrix graph coloring and the avoidance of dynamic memory allocations thanks to prefix-sum memory management. An experimental backward error analysis is included. We break down the timings of the different phases, identify phases that are memory-bandwidth limited, and discuss alternatives for phases that may be sensitive to the trend toward employing lower precisions for performance.
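The multi-level matrix graph coloring that enables the parallel batching can be sketched as follows (a hypothetical illustration under assumed data structures, not the paper's implementation): blocks that share a data dependency are adjacent in a graph, a greedy algorithm assigns colors so that no two adjacent blocks share a color, and all blocks of one color can then be eliminated concurrently as a batch.

```python
def greedy_coloring(adjacency):
    """Greedy graph coloring.

    adjacency: dict mapping each node to the set of its neighbors
    (an edge encodes a data dependency between block eliminations).
    Returns a dict mapping each node to the smallest color not used
    by its already-colored neighbors.
    """
    colors = {}
    for node in adjacency:
        used = {colors[n] for n in adjacency[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

# Toy dependency graph: blocks 0-1-2 are mutually dependent, 3 depends only on 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
coloring = greedy_coloring(adj)
batches = {}
for node, c in coloring.items():
    batches.setdefault(c, []).append(node)
# Each batches[c] is a set of mutually independent blocks,
# safe to process concurrently.
```

Applying this per level of the hierarchy (hence "multi-level") yields, at each level, color classes of independent block operations that map naturally onto concurrent batch kernels.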