Overcomplete Tensor Decomposition via Koszul-Young Flattenings

📅 2024-11-21
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
This paper addresses exact CP decomposition and uniqueness certification for high-rank (overcomplete) third-order tensors with dimensions satisfying $n_1 \leq n_2 \leq n_3$ and $n_3/n_2 = O(1)$. It introduces Koszul-Young flattenings, previously unexploited in tensor decomposition algorithms, to overcome classical rank limitations. For $n \times n \times n$ tensors, the method supports decomposition up to rank $(2-\varepsilon)n$, surpassing simultaneous diagonalization ($r \leq n$) as well as the recent results of Koiran (2024) and Persu (2018). Leveraging tools from algebraic geometry, specifically polynomial flattenings and genericity analysis, the authors give a polynomial-time algorithm that jointly computes the CP decomposition and certifies its uniqueness under genericity assumptions. On the limitations side, they prove that no flattening of this style can certify rank beyond $n_2 + n_3$, and that more general degree-$d$ polynomial flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$, suggesting that decomposition with generic components may be fundamentally harder than with random components, where efficient decomposition is possible even in highly overcomplete settings.
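The classical baseline that the paper improves on, simultaneous diagonalization (Jennrich's algorithm), can be sketched in a few lines of NumPy. This is an illustrative sketch of the $r \leq n$ baseline only, not the paper's Koszul-Young method; all variable names are ours.

```python
# Illustrative sketch of simultaneous diagonalization (Jennrich's algorithm),
# the classical r <= n baseline the paper improves on; NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 6  # undercomplete regime where the baseline applies: r <= n

# Generic rank-r tensor T = sum_i a_i (x) b_i (x) c_i
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum("ir,jr,kr->ijk", A, B, C)

# Two random linear combinations of the frontal slices T[:, :, k]
w1, w2 = rng.standard_normal(n), rng.standard_normal(n)
M1 = np.einsum("ijk,k->ij", T, w1)  # equals A @ diag(C.T @ w1) @ B.T
M2 = np.einsum("ijk,k->ij", T, w2)  # equals A @ diag(C.T @ w2) @ B.T

# Generically, M1 @ pinv(M2) = A @ diag(ratios) @ pinv(A), so its
# eigenvectors with nonzero eigenvalues are the columns of A (up to scale)
eigvals, eigvecs = np.linalg.eig(M1 @ np.linalg.pinv(M2))
top_r = np.argsort(-np.abs(eigvals))[:r]
A_hat = np.real(eigvecs[:, top_r])

# Sanity check: the recovered columns span the same space as the true A.
# (B is recovered analogously from M1.T @ pinv(M2.T); C by least squares.)
proj = A_hat @ np.linalg.pinv(A_hat)
print(np.allclose(proj @ A, A, atol=1e-6))  # True for generic inputs
```

The slice trick reduces the tensor problem to a joint eigendecomposition, which is exactly why the approach caps out at $r \leq n$: the slices are $n \times n$ matrices and cannot have rank above $n$.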

📝 Abstract
Motivated by connections between algebraic complexity lower bounds and tensor decompositions, we investigate Koszul-Young flattenings, which are the main ingredient in recent lower bounds for matrix multiplication. Based on this tool we give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms, and certifying uniqueness of this decomposition. For $n_1 \le n_2 \le n_3$ with $n_1 \to \infty$ and $n_3/n_2 = O(1)$, our algorithm is guaranteed to succeed when the tensor rank is bounded by $r \le (1-\epsilon)(n_2 + n_3)$ for an arbitrary $\epsilon > 0$, provided the tensor components are generically chosen. For any fixed $\epsilon$, the runtime is polynomial in $n_3$. When $n_2 = n_3 = n$, our condition on the rank gives a factor-of-2 improvement over the classical simultaneous diagonalization algorithm, which requires $r \le n$, and also improves on the recent algorithm of Koiran (2024) which requires $r \le 4n/3$. It also improves on the PhD thesis of Persu (2018) which solves rank detection for $r \leq 3n/2$. We complement our upper bounds by showing limitations, in particular that no flattening of the style we consider can surpass rank $n_2 + n_3$. Furthermore, for $n \times n \times n$ tensors, we show that an even more general class of degree-$d$ polynomial flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$. This suggests that for tensor decompositions, the case of generic components may be fundamentally harder than that of random components, where efficient decomposition is possible even in highly overcomplete settings.
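As a toy illustration (ours, not the paper's construction) of how a flattening certifies a rank lower bound: each rank-1 term contributes at most a rank-1 matrix to the mode-1 unfolding, so the matrix rank of the unfolding lower-bounds the tensor rank. This classical flattening saturates at $n$, which is exactly the barrier the Koszul-Young flattenings push toward $n_2 + n_3$.

```python
# Toy illustration (not the paper's construction): a flattening turns the
# tensor into a matrix whose rank lower-bounds the tensor rank.
import numpy as np

rng = np.random.default_rng(1)
n, r = 5, 4
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum("ir,jr,kr->ijk", A, B, C)  # generic rank-r tensor

# Mode-1 unfolding: each rank-1 term a (x) b (x) c flattens to the rank-1
# matrix outer(a, vec(b (x) c)), so rank(unfolding) <= rank(T)
T1 = T.reshape(n, n * n)
print(np.linalg.matrix_rank(T1))  # 4: certifies rank(T) >= 4
# But rank(T1) <= n always, so this flattening cannot certify rank beyond n.
```

Koszul-Young flattenings replace the identity-like unfolding with a map into exterior-algebra spaces, which is what lets the certified rank exceed the slice dimension.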
Problem

Research questions and friction points this paper is trying to address.

Decomposing tensors into minimal rank-1 terms efficiently
Certifying uniqueness of tensor decomposition with generic components
Improving rank bounds for overcomplete tensor decomposition algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Koszul-Young flattenings for tensor decomposition
Certifies uniqueness of minimal rank-1 decomposition
Handles overcomplete tensors with rank up to (1−ε)(n₂+n₃)
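For the square case $n_2 = n_3 = n$, the rank regimes reported in the abstract line up as follows (note that the Persu (2018) bound is for rank detection rather than full decomposition):

$$
\underbrace{r \le n}_{\text{simult. diagonalization}}
\;\longrightarrow\;
\underbrace{r \le \tfrac{4n}{3}}_{\text{Koiran (2024)}}
\;\longrightarrow\;
\underbrace{r \le \tfrac{3n}{2}}_{\text{Persu (2018), rank detection}}
\;\longrightarrow\;
\underbrace{r \le (1-\epsilon)\cdot 2n}_{\text{this paper}}
$$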