AI Summary
This work addresses the reliance on cross-validation for rank selection in low-rank tensor regression by establishing, for the first time under Gaussian random covariate designs, a theoretical connection between the training–test error gap (optimism) and the true tensor rank. Leveraging random matrix theory and expected generalization error analysis, the authors derive prediction-oriented rank selection criteria for both CP and Tucker decompositions that eliminate the need for cross-validation. The proposed framework is further extended to tensor model averaging and neural network compression, demonstrating strong empirical performance on image regression tasks: it substantially reduces model complexity while preserving predictive accuracy.
Abstract
We study rank selection for low-rank tensor regression under a random covariate design. Under a Gaussian random-design model and mild conditions, we derive population expressions for the expected training–test discrepancy (optimism) for both the CP and Tucker decompositions. We further show that the optimism is minimized at the true tensor rank for both CP and Tucker regression. This yields a prediction-oriented rank-selection rule that aligns with cross-validation and extends naturally to tensor-model averaging. We also discuss conditions under which under- or over-ranked models may appear preferable, thereby clarifying the scope of the method. Finally, we showcase the method's practical utility on a real-world image regression task and extend it to tensor-based compression of neural networks, highlighting its potential for model selection in deep learning.
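To make the prediction-oriented rule concrete, the sketch below performs optimism-based rank selection in the order-2 (reduced-rank regression) special case under a Gaussian random design. This is a minimal illustration, not the paper's method: the Mallows-Cp-style penalty `2 * sigma2_hat * df / (n * q)` with `df = r * (p + q - r)` is an assumed stand-in for the paper's population optimism expressions for CP and Tucker models, and all function names are hypothetical.

```python
# Minimal sketch: pick the rank minimizing (training error + optimism penalty).
# Assumption: a Mallows-Cp-style penalty replaces the paper's exact optimism
# expressions; df = r*(p + q - r) counts free parameters of a rank-r p x q matrix.
import numpy as np

def fit_reduced_rank(X, Y, r):
    """Rank-r least-squares fit of Y ~ X B via SVD of the OLS fitted values."""
    B_ols = np.linalg.pinv(X) @ Y
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    Y_hat_r = U[:, :r] * s[:r] @ Vt[:r]          # rank-r projection of fits
    return np.linalg.pinv(X) @ Y_hat_r            # map back to a coefficient matrix

def select_rank(X, Y, ranks):
    n, p = X.shape
    q = Y.shape[1]
    # Rough noise-variance estimate from full-rank OLS residuals (assumption).
    resid_full = Y - X @ np.linalg.pinv(X) @ Y
    sigma2_hat = resid_full.var()
    best_r, best_crit = None, np.inf
    for r in ranks:
        B_r = fit_reduced_rank(X, Y, r)
        train_mse = np.mean((Y - X @ B_r) ** 2)
        df = r * (p + q - r)
        crit = train_mse + 2.0 * sigma2_hat * df / (n * q)  # train error + optimism
        if crit < best_crit:
            best_r, best_crit = r, crit
    return best_r

# Usage: Gaussian random design with a true rank-3 coefficient matrix.
rng = np.random.default_rng(0)
n, p, q, r_true = 500, 20, 15, 3
B_true = rng.normal(size=(p, r_true)) @ rng.normal(size=(r_true, q))
X = rng.normal(size=(n, p))
Y = X @ B_true + rng.normal(scale=0.5, size=(n, q))
print(select_rank(X, Y, ranks=range(1, 10)))  # typically recovers 3
```

The same selection loop carries over to higher-order tensors by swapping in a CP or Tucker fitting routine and the corresponding optimism expression in place of the matrix parameter count used here.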