🤖 AI Summary
This paper addresses data leakage and computational redundancy arising from preprocessing in partition-based cross-validation. We propose a matrix-algebra-driven acceleration framework that supports model selection for PCA, principal component regression (PCR), and ridge regression. For the first time, we provide provably leakage-free and efficiently verifiable cross-validation algorithms for all 12 essentially distinct combinations of column centering and scaling. Leveraging block-wise preprocessing derivation and validation-set-driven reconstruction of $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$, our method reduces both time and space complexity to the order of a single matrix multiplication, scaling independently of the number of folds. An open-source implementation demonstrates that preprocessing overhead remains bounded, numerical precision is preserved, and the framework accommodates all 16 commonly used preprocessing combinations.
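The leakage-free preprocessing described above can be illustrated with a small sketch (my own illustration, not the paper's reference implementation): for each fold, the training-set column means and standard deviations are recovered by subtracting the validation fold's sums from precomputed global sums, so the preprocessing statistics never see validation samples and cost only $O(n_{\text{val}} \cdot p)$ per fold. All variable names here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 4
X = rng.normal(size=(n, p))

# Global column sums and sums of squares, computed once over all samples.
col_sum = X.sum(axis=0)
col_sq_sum = (X ** 2).sum(axis=0)

folds = np.array_split(np.arange(n), 3)
for val_idx in folds:
    n_train = n - len(val_idx)
    X_val = X[val_idx]
    # Training-fold mean and (population) std via validation-set downdating:
    # no validation sample influences the preprocessing statistics.
    mean_train = (col_sum - X_val.sum(axis=0)) / n_train
    var_train = (col_sq_sum - (X_val ** 2).sum(axis=0)) / n_train - mean_train ** 2
    std_train = np.sqrt(var_train)
    # Sanity check against naive recomputation on the training partition.
    mask = np.ones(n, dtype=bool)
    mask[val_idx] = False
    assert np.allclose(mean_train, X[mask].mean(axis=0))
    assert np.allclose(std_train, X[mask].std(axis=0))
```

The naive alternative recomputes the statistics over each training partition, repeating work proportional to the number of folds; the downdating form touches only the validation samples.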
📝 Abstract
We present algorithms that substantially accelerate partition-based cross-validation for machine learning models that require matrix products $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$. Our algorithms have applications in model selection for, for example, principal component analysis (PCA), principal component regression (PCR), ridge regression (RR), ordinary least squares (OLS), and partial least squares (PLS). Our algorithms support all combinations of column-wise centering and scaling of $\mathbf{X}$ and $\mathbf{Y}$, and we demonstrate in our accompanying implementation that this adds only a manageable, practical constant over efficient variants without preprocessing. We prove the correctness of our algorithms under a fold-based partitioning scheme and show that the running time is independent of the number of folds; that is, they have the same time complexity as that of computing $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$ and space complexity equivalent to storing $\mathbf{X}$, $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$. Importantly, unlike alternatives found in the literature, we avoid data leakage due to preprocessing. We achieve these results by eliminating redundant computations in the overlap between training partitions. Concretely, we show how to manipulate $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$ using only samples from the validation partition to obtain the preprocessed training partition-wise $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$. To our knowledge, we are the first to derive correct and efficient cross-validation algorithms for any of the 16 combinations of column-wise centering and scaling, for which we also prove only 12 give distinct matrix products.
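The central trick, obtaining each training partition's matrix products by removing only the validation partition's contribution from the precomputed global products, can be sketched as follows. This is a minimal illustration under assumed names, not the authors' implementation, and it omits the centering/scaling variants the paper handles:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 100, 5, 2
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, q))

# Precompute the global products once: O(n * p^2) and O(n * p * q).
XtX = X.T @ X
XtY = X.T @ Y

folds = np.array_split(np.arange(n), 4)
for val_idx in folds:
    X_val, Y_val = X[val_idx], Y[val_idx]
    # Training-partition products via validation-set downdating:
    # cost per fold depends only on the validation partition's size,
    # so the total work is independent of the number of folds.
    XtX_train = XtX - X_val.T @ X_val
    XtY_train = XtY - X_val.T @ Y_val
    # Sanity check against naive recomputation over the training rows.
    mask = np.ones(n, dtype=bool)
    mask[val_idx] = False
    assert np.allclose(XtX_train, X[mask].T @ X[mask])
    assert np.allclose(XtY_train, X[mask].T @ Y[mask])
```

Summed over all folds, the downdates touch each sample once, which is why the total time matches a single computation of $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$ rather than growing with the number of folds.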