🤖 AI Summary
This study characterizes the statistical behavior of the eigenvalues of random matrices under min-max normalization. Because normalization perturbs the spectrum and distorts its distribution, the study builds on a previously proposed effective distribution for the normalized eigenvalues to establish a scaling law for their cumulative distribution function, and derives an analytical expression for the residual error that normalization introduces into low-rank matrix decomposition. Methodologically, it combines tools from random matrix theory, extreme-value statistics, and matrix perturbation analysis, corroborated by numerical experiments. The results show that normalization alters the asymptotic eigenvalue distribution in a way quantitatively captured by the derived scaling law, and that the residual error decays with increasing matrix dimension, giving a quantifiable account of how this preprocessing step affects spectral methods. The work thus provides a theoretical benchmark for spectral analysis under data preprocessing.
📝 Abstract
Random matrix theory has played an important role in various areas of pure mathematics, mathematical physics, and machine learning. From the practical perspective of data science, input data are usually normalized prior to processing. This study therefore investigates the statistical properties of min-max normalized eigenvalues of random matrices. An effective distribution for such normalized eigenvalues has previously been proposed; in this study, we apply it to evaluate a scaling law of the cumulative distribution. Furthermore, we derive the residual error that arises during matrix factorization of random matrices. We conducted numerical experiments to verify these theoretical predictions.