🤖 AI Summary
This paper addresses fine-grained singular subspace estimation for low-rank signal matrices corrupted by Gaussian noise, focusing on the limiting distribution of the maximum Euclidean row norm (i.e., the $\|\cdot\|_{2\to\infty}$ norm) quantifying alignment discrepancies between sample and population leading singular vectors. Methodologically, it pioneers the integration of extreme value theory with matrix perturbation analysis to rigorously derive a Gumbel-type asymptotic distribution for this norm and to construct a bias-corrected test statistic. The proposed $\|\cdot\|_{2\to\infty}$-based hypothesis testing framework significantly enhances detection power for row-wise or entry-wise sparse structural changes, outperforming conventional Frobenius-norm-based approaches. Theoretically, the de-biased plug-in test statistic is shown to have desirable asymptotic properties; numerical experiments confirm its strong performance even under non-Gaussian noise. This work provides a novel tool for localized structural signal identification in high-dimensional noisy settings.
📝 Abstract
This paper studies fine-grained singular subspace estimation in the matrix denoising model where a deterministic low-rank signal matrix is additively perturbed by a stochastic matrix of Gaussian noise. We establish that the maximum Euclidean row norm (i.e., the two-to-infinity norm) of the aligned difference between the leading sample and population singular vectors approaches the Gumbel distribution in the large-matrix limit, under suitable signal-to-noise conditions and after appropriate centering and scaling. We apply our novel asymptotic distributional theory to test hypotheses of low-rank signal structure encoded in the leading singular vectors and their corresponding principal subspace. We provide de-biased estimators for the corresponding nuisance signal singular values and show that our proposed plug-in test statistic has desirable properties. Notably, compared to using the Frobenius norm subspace distance, our test statistic based on the two-to-infinity norm has higher power to detect structured alternatives that differ from the null in only a few matrix entries or rows. Our main results are obtained by a novel synthesis of, and technical analysis involving, entrywise matrix perturbation analysis, extreme value theory, saddle point approximation methods, and random matrix theory. Our contributions complement the existing literature for matrix denoising focused on minimaxity, mean squared error analysis, unitarily invariant distances between subspaces, component-wise asymptotic distributional theory, and row-wise uniform error bounds. Numerical simulations illustrate our main results and demonstrate the robustness properties of our testing procedure to non-Gaussian noise distributions.
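To make the central quantity concrete, the following minimal sketch simulates the matrix denoising model described above and computes the two-to-infinity norm of the aligned difference between sample and population leading singular vectors. All dimensions, signal strengths, and the Procrustes-based alignment step here are illustrative assumptions, not the paper's exact construction (in particular, the paper's bias correction, centering, and scaling are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 400, 300, 2  # hypothetical matrix dimensions and signal rank

# Low-rank signal M = U diag(s) V^T with strong (illustrative) singular values
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
s = np.array([300.0, 250.0])
M = U @ np.diag(s) @ V.T

# Additive Gaussian noise: the matrix denoising model X = M + E
X = M + rng.standard_normal((n, p))

# Leading r sample singular vectors of the observed matrix
Uhat = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Align Uhat to U by the orthogonal Procrustes rotation W
# (the orthogonal matrix minimizing the Frobenius misalignment)
A, _, Bt = np.linalg.svd(Uhat.T @ U)
W = A @ Bt

# Two-to-infinity norm: the maximum Euclidean row norm of the aligned difference
diff = Uhat @ W - U
two_to_inf = np.max(np.linalg.norm(diff, axis=1))
print(two_to_inf)
```

Because the norm takes a maximum over rows, its fluctuations are governed by extreme value theory, which is the source of the Gumbel limit after appropriate centering and scaling.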