Fast Partition-Based Cross-Validation With Centering and Scaling for $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$

📅 2024-01-24
🏛️ Journal of Chemometrics
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses data leakage and computational redundancy arising from preprocessing in partition-based cross-validation. We propose a matrix-algebra-driven acceleration framework that supports model selection for PCA, principal component regression (PCR), and ridge regression. For the first time, we provide provably leakage-free and efficiently verifiable cross-validation algorithms for all 12 essentially distinct combinations of column centering and scaling. Leveraging block-wise preprocessing derivation and validation-set-driven reconstruction of $\mathbf{X}^\top\mathbf{X}$ and $\mathbf{X}^\top\mathbf{Y}$, our method reduces both time and space complexity to the order of a single matrix multiplication, scaling independently of the number of folds. An open-source implementation demonstrates that preprocessing overhead remains bounded, numerical precision is preserved, and the framework accommodates all 16 commonly used preprocessing combinations.
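The validation-set-driven reconstruction mentioned above can be sketched in a few lines of NumPy. This is a hypothetical illustration of the idea, not the paper's actual API (the name `train_products` is ours): the global Gram matrices are computed once, and each fold's training-partition products are recovered by subtracting only the validation rows' contribution, so per-fold cost depends on the validation size rather than the training size.

```python
import numpy as np

# Hypothetical sketch (our naming, not the paper's API): precompute the
# global Gram matrices once, then recover each fold's training-partition
# products using only the validation rows.

def train_products(XtX, XtY, X_val, Y_val):
    """Training-partition X^T X and X^T Y via validation-only subtraction."""
    return XtX - X_val.T @ X_val, XtY - X_val.T @ Y_val

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 2))
XtX, XtY = X.T @ X, X.T @ Y     # computed once, shared by all folds

val = np.arange(20)             # one validation fold's row indices
mask = np.ones(100, dtype=bool)
mask[val] = False               # the remaining rows form the training partition

fast_XtX, fast_XtY = train_products(XtX, XtY, X[val], Y[val])

# Matches the naive recomputation over the training rows
assert np.allclose(fast_XtX, X[mask].T @ X[mask])
assert np.allclose(fast_XtY, X[mask].T @ Y[mask])
```

Because the subtraction touches only validation rows, iterating over all folds costs the same order as the single multiplication producing `XtX`, consistent with the fold-independent complexity claimed in the summary.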

📝 Abstract
We present algorithms that substantially accelerate partition-based cross-validation for machine learning models that require matrix products $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. Our algorithms have applications in model selection for, for example, principal component analysis (PCA), principal component regression (PCR), ridge regression (RR), ordinary least squares (OLS), and partial least squares (PLS). Our algorithms support all combinations of column-wise centering and scaling of $\mathbf{X}$ and $\mathbf{Y}$, and we demonstrate in our accompanying implementation that this adds only a manageable, practical constant over efficient variants without preprocessing. We prove the correctness of our algorithms under a fold-based partitioning scheme and show that the running time is independent of the number of folds; that is, they have the same time complexity as that of computing $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$ and space complexity equivalent to storing $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{X}^{\mathbf{T}}\mathbf{X}$, and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. Importantly, unlike alternatives found in the literature, we avoid data leakage due to preprocessing. We achieve these results by eliminating redundant computations in the overlap between training partitions. Concretely, we show how to manipulate $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$ using only samples from the validation partition to obtain the preprocessed training partition-wise $\mathbf{X}^{\mathbf{T}}\mathbf{X}$ and $\mathbf{X}^{\mathbf{T}}\mathbf{Y}$. To our knowledge, we are the first to derive correct and efficient cross-validation algorithms for any of the 16 combinations of column-wise centering and scaling, for which we also prove only 12 give distinct matrix products.
Problem

Research questions and friction points this paper is trying to address.

Accelerate cross-validation for matrix-based ML models
Support centering and scaling without data leakage
Reduce redundant computations in training partitions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Accelerates cross-validation via matrix product optimization
Supports all centering and scaling combinations efficiently
Prevents preprocessing-induced data leakage across partitions