🤖 AI Summary
This paper investigates spectral learning of orthogonally decomposable (odeco) tensors, focusing on the interplay among statistical limits, optimization geometry, and initialization. To address key challenges, including poor robustness to noise, restrictive reliance on eigengaps, and computational bottlenecks induced by poor initialization, the authors propose an analytical framework that dispenses with the conventional eigengap assumption. By integrating tensor power iteration, refined perturbation analysis, and nonconvex optimization theory, they establish quantitative links between perturbation bounds and convergence guarantees. They further design an efficient spectral initialization scheme and prove, for the first time, that it achieves statistically optimal recovery. The core contribution is showing that initialization fundamentally determines statistical optimality, together with a precise characterization of the intrinsic trade-off between computational feasibility and statistical efficiency in odeco tensor estimation.
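To make the iterative step concrete, here is a minimal NumPy sketch of tensor power iteration with deflation for a symmetric third-order odeco tensor. This illustrates the general technique the summary names, not the paper's specific implementation; the function names and the rank-2 demo setup are illustrative.

```python
import numpy as np

def t_apply(T, u):
    # Contract a symmetric 3rd-order tensor along two modes: T(I, u, u).
    return np.einsum('ijk,j,k->i', T, u, u)

def tensor_power_iteration(T, n_iter=50, seed=0):
    # Repeated contraction + normalization; for a noiseless odeco tensor
    # this converges to one of the orthonormal components a_i.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = t_apply(T, u)
        u = v / np.linalg.norm(v)
    lam = u @ t_apply(T, u)  # recovered weight: lam = T(u, u, u)
    return lam, u

# Demo: build a rank-2 odeco tensor from orthonormal columns of Q,
# recover one component, then deflate and recover the next.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
T = sum(lam * np.einsum('i,j,k->ijk', a, a, a)
        for lam, a in [(3.0, Q[:, 0]), (1.5, Q[:, 1])])
lam1, u1 = tensor_power_iteration(T)
T_deflated = T - lam1 * np.einsum('i,j,k->ijk', u1, u1, u1)
lam2, u2 = tensor_power_iteration(T_deflated, seed=2)
```

With a random start, each run converges to some component of the orthogonal family; deflation subtracts the recovered rank-one term so the next run finds a different component.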
📝 Abstract
We study spectral learning for orthogonally decomposable (odeco) tensors, emphasizing the interplay between statistical limits, optimization geometry, and initialization. Unlike matrices, recovery for odeco tensors does not hinge on eigengaps, yielding improved robustness under noise. While iterative methods such as tensor power iteration can be statistically efficient, initialization emerges as the main computational bottleneck. We investigate perturbation bounds, nonconvex optimization analysis, and initialization strategies, clarifying when efficient algorithms attain statistical limits and when fundamental barriers remain.
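The initialization bottleneck can be made concrete with a standard spectral warm start used in the tensor-decomposition literature (whether it matches the scheme proposed in this paper is an assumption): contract the tensor with a random Gaussian direction and take a leading eigenvector of the resulting matrix, then refine with power iteration.

```python
import numpy as np

def spectral_init(T, rng):
    # For T = sum_i lam_i * a_i (x) a_i (x) a_i, the contraction
    # M = T(I, I, g) equals sum_i lam_i <a_i, g> a_i a_i^T, so its
    # leading eigenvector (by absolute eigenvalue) aligns with some a_i.
    g = rng.standard_normal(T.shape[2])
    M = np.einsum('ijk,k->ij', T, g)   # symmetric since T is symmetric
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(np.abs(w))]  # warm start for power iteration

# u0 = spectral_init(T, np.random.default_rng(0)) can then seed the
# power-iteration refinement sketched in the summary above.
```

The random contraction collapses the order-3 problem to a matrix eigenproblem whose top eigenvector is already correlated with a true component, which is exactly the kind of warm start that determines whether subsequent iterations reach the statistical limits discussed above.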