🤖 AI Summary
This work addresses the problem of efficiently multiplying 0/1 matrices of bounded twin-width with vectors or other matrices, without prior knowledge of the twin-width value or of any specific row/column ordering. We introduce a preprocessing technique that combines twin-width–based matrix decomposition with efficient data structures, achieving near-linear-time matrix–vector multiplication for the first time without requiring the input to be $d$-twin-ordered. The algorithm substantially outperforms earlier approaches based on first-order model checking and extends robustly to settings with adversarial corruption. Specifically, preprocessing runs in $\widetilde{\mathcal{O}}_d(n^2)$ time and each subsequent matrix–vector multiplication in $\widetilde{\mathcal{O}}_d(n)$ time; if the matrix is already $d$-twin-ordered, these bounds improve to $\mathcal{O}(n^2 + dn)$ and $\mathcal{O}(dn)$, respectively. Moreover, the product of two $n \times n$ matrices, at least one of which is a 0/1 matrix of bounded twin-width, can be computed in $\widetilde{\mathcal{O}}(n^2)$ time.
📝 Abstract
Matrix multiplication is a fundamental task in almost all computational fields, including machine learning and optimization, computer graphics, signal processing, and graph algorithms (static and dynamic). Twin-width is a natural complexity measure of matrices (and more general structures) that has recently emerged as a unifying concept with important algorithmic applications. While the twin-width of a matrix is invariant to re-ordering rows and columns, most of its algorithmic applications to date assume that the input is given in a certain canonical ordering that yields a bounded twin-width contraction sequence. In general, efficiently finding such a sequence -- even for an approximate twin-width value -- remains a central and elusive open question.
In this paper we show that a binary $n \times n$ matrix of twin-width $d$ can be preprocessed in $\widetilde{\mathcal{O}}_d(n^2)$ time, so that its product with any vector can be computed in $\widetilde{\mathcal{O}}_d(n)$ time. Notably, the twin-width of the input matrix need not be known and no particular ordering of its rows and columns is assumed. If a canonical ordering is available, i.e., if the input matrix is $d$-twin-ordered, then the runtimes of preprocessing and matrix-vector products can be further reduced to $\mathcal{O}(n^2+dn)$ and $\mathcal{O}(dn)$, respectively.
Consequently, we can multiply two $n \times n$ matrices in $\widetilde{\mathcal{O}}(n^2)$ time, when at least one of the matrices consists of 0/1 entries and has bounded twin-width. The results also extend to the case of bounded twin-width matrices with adversarial corruption. Our algorithms are significantly faster and simpler than earlier methods that involved first-order model checking and required both input matrices to be $d$-twin-ordered.
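The speedup behind twin-width can be illustrated with its simplest special case: two identical rows of a 0/1 matrix ("twins") have identical dot products with any vector, so each distinct row pattern needs to be multiplied only once. The sketch below is a toy illustration of this merging idea only, not the paper's algorithm, which handles the far more general bounded-twin-width structure via contraction sequences:

```python
def matvec_with_row_twins(A, x):
    """Multiply a 0/1 matrix A (given as a list of rows) with a
    vector x, computing the dot product only once per distinct
    row pattern. Exact duplicate rows are the simplest kind of
    'twins'; bounded twin-width generalizes this far beyond
    literal duplicates."""
    cache = {}          # row pattern -> its dot product with x
    result = []
    for row in A:
        key = tuple(row)
        if key not in cache:
            cache[key] = sum(a * b for a, b in zip(row, x))
        result.append(cache[key])
    return result

# Rows 0 and 1 are twins, so only two dot products are computed.
A = [[1, 0, 1],
     [1, 0, 1],
     [0, 1, 0]]
x = [2, 3, 4]
print(matvec_with_row_twins(A, x))  # -> [6, 6, 3]
```

With many repeated rows, the work drops from $n$ dot products to one per distinct pattern; the paper's contribution is achieving a comparable saving when rows are only "approximately" twins, as captured by twin-width.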