The power of small initialization in noisy low-tubal-rank tensor recovery

📅 2026-03-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the problem of low-tubal-rank tensor recovery from noisy linear measurements in the over-parameterized regime, where overestimating the tubal rank typically causes the recovery error of factorized gradient descent (FGD) to grow linearly with the estimated rank. Within the t-product framework, the authors propose a small initialization strategy that enables FGD to achieve nearly minimax-optimal recovery even when the tubal rank is severely overestimated. This approach eliminates the dependence of the recovery error on the overestimated rank, yielding the tightest known rank-independent error bound to date. Furthermore, it provides a theoretically grounded early stopping criterion that is practical to implement. Experiments on both synthetic and real-world data demonstrate that combining small initialization with early stopping achieves optimal recovery performance, with errors remaining stable regardless of the degree of rank overestimation.

๐Ÿ“ Abstract
We study the problem of recovering a low-tubal-rank tensor $\mathcal{X}_\star\in \mathbb{R}^{n \times n \times k}$ from noisy linear measurements under the t-product framework. A widely adopted strategy involves factorizing the optimization variable as $\mathcal{U} * \mathcal{U}^\top$, where $\mathcal{U} \in \mathbb{R}^{n \times R \times k}$, followed by applying factorized gradient descent (FGD) to solve the resulting optimization problem. Since the tubal-rank $r$ of the underlying tensor $\mathcal{X}_\star$ is typically unknown, this method often assumes $r < R \le n$, a regime known as over-parameterization. However, when the measurements are corrupted by dense noise (e.g., Gaussian noise), FGD with the commonly used spectral initialization yields a recovery error that grows linearly with the overestimated tubal-rank $R$. To address this issue, we show that using a small initialization enables FGD to achieve a nearly minimax optimal recovery error, even when the tubal-rank $R$ is significantly overestimated. Using a four-stage analytic framework, we analyze this phenomenon and establish the sharpest known error bound to date, which is independent of the overestimated tubal-rank $R$. Furthermore, we provide a theoretical guarantee showing that an easy-to-use early stopping strategy can achieve the best known result in practice. All these theoretical findings are validated through a series of simulations and real-data experiments.
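The recovery scheme described in the abstract can be sketched in NumPy: the t-product is computed via an FFT along the third (tube) mode, the ground truth is built as $\mathcal{U}_\star * \mathcal{U}_\star^\top$, and FGD is run from a small random initialization with an overestimated factor width $R > r$. All dimensions, the step size `eta`, and the initialization scale `alpha` below are illustrative assumptions, not the paper's settings, and the measurement model is a generic Gaussian sensing operator.

```python
import numpy as np

def tprod(A, B):
    """t-product of tensors A (n x m x k) and B (m x p x k):
    FFT along the tube dimension, slice-wise products, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def ttrans(A):
    """Tensor transpose under the t-product: transpose each frontal
    slice and reverse the order of slices 2..k."""
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

rng = np.random.default_rng(0)
n, k, r, R = 8, 3, 2, 6          # true tubal-rank r, overestimated width R
m = 40 * n * r * k               # number of linear measurements (assumed ample)

# Ground-truth low-tubal-rank tensor X_star = U_star * U_star^T (t-product)
U_star = rng.normal(size=(n, r, k)) / np.sqrt(n)
X_star = tprod(U_star, ttrans(U_star))

# Gaussian measurement tensors and noisy observations
A = rng.normal(size=(m, n, n, k)) / np.sqrt(m)
y = np.einsum('mijk,ijk->m', A, X_star) + 1e-3 * rng.normal(size=m)

# FGD from a small random initialization, despite R > r
alpha, eta, iters = 1e-3, 0.02, 1500
U = alpha * rng.normal(size=(n, R, k))
for _ in range(iters):
    resid = np.einsum('mijk,ijk->m', A, tprod(U, ttrans(U))) - y
    G = np.einsum('m,mijk->ijk', resid, A)   # gradient w.r.t. the full tensor
    U = U - eta * tprod(G + ttrans(G), U)    # chain rule through U * U^T

err = np.linalg.norm(tprod(U, ttrans(U)) - X_star) / np.linalg.norm(X_star)
print(f"relative recovery error: {err:.3e}")
```

Despite `R` being three times the true tubal-rank, the small initialization keeps the excess factor directions near zero while the signal directions grow, so the final error is governed by the noise rather than by `R`; in practice one would stop early according to the paper's criterion rather than fix the iteration count.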
Problem

Research questions and friction points this paper is trying to address.

low-tubal-rank tensor recovery
noisy measurements
over-parameterization
recovery error
dense noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

small initialization
low-tubal-rank tensor recovery
factorized gradient descent
over-parameterization
minimax optimal error