🤖 AI Summary
This work addresses a limitation of existing matrix Bernstein inequalities for symmetric random matrices—namely, their dependence on prior knowledge of the variance—under eigenvalue boundedness assumptions. We propose two tight, closed-form empirical Bernstein inequalities. Methodologically, we are the first to recover, in the dominant $1/\sqrt{n}$ term, the constant-level accuracy of the classical matrix Bernstein inequality *without* requiring known variances, thereby adaptively capturing unknown variance structures. Our approach integrates matrix concentration analysis, empirical process theory, and martingale stopping-time techniques to achieve asymptotically optimal deviation control in operator norm. Contributions include: (1) breaking the variance-prior bottleneck, enabling robust, model-free matrix statistical inference; and (2) providing sharp theoretical guarantees for high-dimensional covariance estimation and online learning.
📝 Abstract
We present two sharp, closed-form empirical Bernstein inequalities for symmetric random matrices with bounded eigenvalues. By sharp, we mean that both inequalities adapt to the unknown variance in a tight manner: the deviation captured by the first-order $1/\sqrt{n}$ term asymptotically matches that of the matrix Bernstein inequality exactly, including constants, even though the latter requires knowledge of the variance. Our first inequality holds for the sample mean of independent matrices, and our second inequality holds for a mean estimator under martingale dependence at stopping times.
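For context, the classical (oracle) matrix Bernstein inequality that the abstract's first-order term is compared against can be stated as follows; this is the standard form for independent, mean-zero symmetric matrices (the notation $\sigma^2$ and $R$ here is the usual one from the concentration literature, not taken from this paper):

```latex
% Classical matrix Bernstein inequality (standard statement, for context).
% Let X_1, ..., X_n be independent, mean-zero, symmetric d x d random
% matrices with \lVert X_k \rVert \le R almost surely, and let
%   \sigma^2 = \bigl\lVert \sum_{k=1}^n \mathbb{E}[X_k^2] \bigr\rVert.
% Then for all t >= 0,
\[
\mathbb{P}\left\{ \lambda_{\max}\!\left( \sum_{k=1}^{n} X_k \right) \ge t \right\}
\;\le\; d \cdot \exp\!\left( \frac{-t^2/2}{\sigma^2 + R t / 3} \right).
\]
```

The empirical inequalities described above aim to match the $\sigma\sqrt{2\log(d/\delta)/n}$-type leading term implied by this bound, but with $\sigma^2$ estimated from the data rather than assumed known.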