Bias-variance Tradeoff in Tensor Estimation

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses denoising of third-order tensors that need not have Tucker low-rank structure. To mitigate the bias-variance imbalance that conventional HOSVD estimation suffers under rank misspecification, we propose an adaptive higher-order SVD (HOSVD) estimator. First, we establish a unified bias-variance analysis framework applicable to any pre-specified Tucker rank, rigorously characterizing the monotonic decrease of the bias and increase of the variance as the rank grows, and recovering a concise bias-variance decomposition for matrix-style low-rank SVD estimators as a special case. Leveraging a random tensor model and concentration analysis, we precisely quantify the joint impact of noise and approximation error, deriving a tight, high-probability upper bound on the estimation error, governed by the noise level, the effective parameter dimension, and the optimal Tucker approximation error, and achieving the minimax-optimal convergence rate.
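The rank-truncated HOSVD step the summary refers to can be sketched as follows. This is a minimal NumPy illustration of standard truncated HOSVD (project onto the top left singular subspace of each mode unfolding), not the paper's exact adaptive estimator; the function names and tensor sizes are ours:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(Y, ranks):
    """Project Y onto the span of the top-r_k left singular vectors
    of each mode-k unfolding (standard truncated HOSVD sketch)."""
    U = []
    for k, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(Y, k), full_matrices=False)
        U.append(u[:, :r])
    # Contract with U_k^T along each mode to get the core, then expand back.
    X = Y
    for k in range(len(ranks)):
        X = np.moveaxis(np.tensordot(U[k].T, X, axes=(1, k)), 0, k)
    for k in range(len(ranks)):
        X = np.moveaxis(np.tensordot(U[k], X, axes=(1, k)), 0, k)
    return X
```

Sweeping `ranks` in this sketch traces out the tradeoff the paper analyzes: larger ranks shrink the approximation bias but retain more of the noise.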

📝 Abstract
We study denoising of a third-order tensor when the ground-truth tensor is not necessarily Tucker low-rank. Specifically, we observe $$Y = X^\ast + Z \in \mathbb{R}^{p_{1} \times p_{2} \times p_{3}},$$ where $X^\ast$ is the ground-truth tensor and $Z$ is the noise tensor. We propose a simple variant of the higher-order tensor SVD estimator $\widetilde{X}$. We show that, uniformly over all user-specified Tucker ranks $(r_{1}, r_{2}, r_{3})$, $$\| \widetilde{X} - X^\ast \|_{\mathrm{F}}^2 = O\Big( \kappa^2 \Big\{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \Big\} \; + \; \xi_{(r_{1}, r_{2}, r_{3})}^2 \Big) \quad \text{with high probability.}$$ Here, the bias term $\xi_{(r_1, r_2, r_3)}$ corresponds to the best achievable approximation error of $X^\ast$ over the class of tensors with Tucker ranks $(r_1, r_2, r_3)$; $\kappa^2$ quantifies the noise level; and the variance term $\kappa^2 \{ r_{1} r_{2} r_{3} + \sum_{k=1}^{3} p_{k} r_{k} \}$ scales with the effective number of free parameters in the estimator $\widetilde{X}$. Our analysis achieves a clean rank-adaptive bias-variance tradeoff: as we increase the ranks of the estimator $\widetilde{X}$, the bias $\xi_{(r_{1}, r_{2}, r_{3})}$ decreases and the variance increases. As a byproduct, we also obtain a convenient bias-variance decomposition for the vanilla low-rank SVD matrix estimators.
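The matrix special case mentioned at the end of the abstract can be illustrated numerically: truncating the SVD of a noisy matrix at the (here, known) signal rank trades a tiny bias for a large variance reduction relative to keeping all singular values. The sizes, noise level, and seed below are illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative sizes, noise level, and seed -- not values from the paper.
rng = np.random.default_rng(1)
p1, p2, r = 50, 40, 3
X = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))  # rank-3 signal
Y = X + 0.1 * rng.standard_normal((p1, p2))                      # noisy observation

def svd_truncate(Y, rank):
    """Keep the top-`rank` singular triplets of Y (vanilla low-rank SVD estimator)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

err_truncated = np.linalg.norm(svd_truncate(Y, r) - X) ** 2  # ~ noise * (p1 + p2) * r
err_full = np.linalg.norm(Y - X) ** 2                        # no truncation: all noise kept
```

Sweeping `rank` from 1 upward reproduces the tradeoff: below the signal rank the bias term dominates, while each rank beyond it only adds noise variance.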
Problem

Research questions and friction points this paper is trying to address.

Denoising third-order tensors without Tucker low-rank assumptions
Achieving rank-adaptive bias-variance tradeoff in tensor estimation
Analyzing tensor SVD estimator performance with varying Tucker ranks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes variant of higher-order tensor SVD estimator
Achieves rank-adaptive bias-variance tradeoff analysis
Provides error bound for non-low-rank tensor denoising
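For a sense of scale, the effective parameter count in the variance term of the error bound, r1*r2*r3 + Σ_k p_k*r_k, can be far smaller than the ambient dimension p1*p2*p3. The dimensions below are illustrative picks, not values from the paper:

```python
# Illustrative dimensions -- not taken from the paper.
p = (100, 100, 100)  # tensor side lengths p_1, p_2, p_3
r = (5, 5, 5)        # user-specified Tucker ranks r_1, r_2, r_3

# Effective parameter count appearing in the variance term of the bound.
effective = r[0] * r[1] * r[2] + sum(pk * rk for pk, rk in zip(p, r))
ambient = p[0] * p[1] * p[2]  # number of entries in the full tensor
```

Here `effective` is 1,625 against an ambient dimension of 1,000,000, which is why the variance scales with the chosen ranks rather than the tensor size.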
🔎 Similar Papers
No similar papers found.
Shivam Kumar
Booth School of Business, University of Chicago
Haotian Xu
Department of Mathematics and Statistics, Auburn University
Carlos Misael Madrid Padilla
Washington University in St. Louis
Yuehaw Khoo
University of Chicago
Computational math
Oscar Hernan Madrid Padilla
Department of Statistics, University of California, Los Angeles
Daren Wang
University of California San Diego
Scientific computing, Applied linear algebra