Binsparse: A Specification for Cross-Platform Storage of Sparse Matrices and Tensors

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sparse matrix/tensor interchange formats (e.g., Matrix Market, FROSTT) rely on ASCII text, which inflates file sizes and slows parsing—a bottleneck because sparse computation is often bandwidth-bound. This paper introduces Binsparse, a general-purpose, modular, embeddable, cross-platform binary sparse data specification. It combines a JSON descriptor (encoding dimensions, data types, and layout semantics) with native binary arrays (supporting CSR, CSC, COO, etc.), and embeds in modern container formats including HDF5, Zarr, and NPZ. On SuiteSparse and FROSTT benchmarks, an HDF5-based CSR implementation achieves, without compression, 2.4× smaller file sizes, 26.5× faster reads, and 31× faster writes versus the ASCII formats; with compression, it attains 7.5× smaller sizes, 2.6× faster reads, and 1.4× faster writes. The open-source project provides reference implementations in multiple programming languages, bridging the gap between computational efficiency and cross-platform portability.

📝 Abstract
Sparse matrices and tensors are ubiquitous throughout multiple subfields of computing. The widespread usage of sparse data has inspired many in-memory and on-disk storage formats, but the only widely adopted storage specifications are the Matrix Market and FROSTT file formats, which both use ASCII text. Due to the inefficiency of text storage, these files typically have larger file sizes and longer parsing times than binary storage formats, which directly store an in-memory representation to disk. This can be a major bottleneck; since sparse computation is often bandwidth-bound, the cost of loading or storing a matrix to disk often exceeds the cost of performing a sparse computation. While it is common practice for practitioners to develop their own, custom, non-portable binary formats for high-performance sparse matrix storage, there is currently no cross-platform binary sparse matrix storage format. We present Binsparse, a cross-platform binary sparse matrix and tensor format specification. Binsparse is a modular, embeddable format, consisting of a JSON descriptor, which describes the matrix or tensor dimensions, type, and format, and a series of binary arrays, which can be stored in all modern binary containers, such as HDF5, Zarr, or NPZ. We provide several reference implementations of Binsparse spanning 5 languages, 5 frameworks, and 4 binary containers. We evaluate our Binsparse format on every matrix in the SuiteSparse Matrix Collection and a selection of tensors from the FROSTT collection. The Binsparse HDF5 CSR format shows file size reductions of 2.4x on average without compression and 7.5x with compression. We evaluate our parser's read/write performance against a state-of-the-art Matrix Market parser, demonstrating warm cache mean read speedups of 26.5x without compression and 2.6x with compression, and write speedups of 31x without compression and 1.4x with compression.
Problem

Research questions and friction points this paper is trying to address.

Lack of cross-platform binary sparse matrix storage format
Inefficient text storage increases file size and parsing time
High cost of loading/storing matrices exceeds computation cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-platform binary sparse matrix storage
Modular format with JSON descriptor
Efficient binary arrays in HDF5, Zarr, NPZ
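To make the design concrete, the sketch below stores a small CSR matrix in the Binsparse style—a JSON descriptor alongside raw binary arrays—inside an NPZ container, one of the containers the paper targets. The descriptor keys used here are illustrative placeholders, not the exact field names mandated by the Binsparse specification.

```python
import json
import numpy as np

# A 3x4 sparse matrix in CSR form: values, column indices, and row pointers.
values   = np.array([5.0, 8.0, 3.0, 6.0])
indices  = np.array([0, 1, 2, 1], dtype=np.int64)
pointers = np.array([0, 1, 3, 3, 4], dtype=np.int64)

# JSON descriptor carrying dimensions, data types, and layout.
# NOTE: these key names are hypothetical, chosen for illustration only.
descriptor = {
    "version": "0.1",
    "format": "CSR",
    "shape": [3, 4],
    "number_of_stored_values": 4,
    "data_types": {"values": "float64", "indices": "int64", "pointers": "int64"},
}

# Embed the descriptor and the binary arrays together in one NPZ file.
np.savez("matrix.bsp.npz",
         binsparse=json.dumps(descriptor),
         values=values, indices=indices, pointers=pointers)

# Reading back: parse the descriptor, then load the arrays directly,
# with no per-element text parsing.
with np.load("matrix.bsp.npz") as f:
    meta = json.loads(f["binsparse"].item())
    vals = f["values"]
    ptrs = f["pointers"]
```

Because the arrays round-trip as raw binary, read cost is dominated by I/O bandwidth rather than parsing, which is the efficiency gap versus ASCII formats that the paper quantifies.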
Benjamin Brock
Intel Labs, Parallel Computing Lab
Parallel Computing · Programming Languages
Willow Ahrens
Assistant Professor at Georgia Tech
Programming Languages · High Performance Computing · Sparse Linear Algebra · Compilers
Hameer Abbasi
Quansight, Beaufort, Luxembourg
Timothy A. Davis
Texas A&M University, College Station, TX, USA
Juni Kim
Massachusetts Institute of Technology, Cambridge, MA, USA
James Kitchen
Anaconda, Austin, TX, USA
Spencer Patty
Intel Corporation, Forest Grove, OR, USA
Isaac Virshup
Software Engineer, Helmholtz Munich
bioinformatics · machine learning
Erik Welch
NVIDIA, Austin, TX, USA