🤖 AI Summary
This study addresses the problem of efficiently estimating the excess risk of large-scale empirical risk minimization models when only black-box access is available. To this end, the authors propose an interleaved resampling-and-refitting algorithm that constructs pseudo-responses from a single training dataset and iteratively generates small synthetic subsets, enabling accurate prediction error estimation without requiring additional validation data or full model retraining. This approach constitutes the first black-box excess risk estimator that operates on a single dataset and avoids costly full-scale retraining, substantially reducing both computational and data overhead. The theoretical analysis, which leverages randomized residual symmetrization, empirical process theory, and tensor concentration inequalities, establishes high-probability upper bounds on the excess risk under both fixed-design and random-design settings.
📝 Abstract
We study the problem of evaluating the excess risk of large-scale empirical risk minimization under the square loss. Leveraging the ideas of wild refitting and resampling, we assume only black-box access to the training algorithm and develop an efficient procedure for estimating the excess risk. Our evaluation algorithm is efficient in both computation and data. In particular, it requires access to only a single dataset and does not rely on any additional validation data. Computationally, it requires refitting the model only on several much smaller datasets obtained through sequential resampling, in contrast to previous wild refitting methods that require full-scale retraining and may therefore be unsuitable for large-scale trained predictors.
Our algorithm has an interleaved sequential resampling-and-refitting structure. We first construct pseudo-responses through a randomized residual symmetrization procedure. At each round, we then resample two sub-datasets from the resulting covariate/pseudo-response pairs. Finally, we retrain the model separately on these two small artificial datasets. We establish high-probability excess risk guarantees under both fixed-design and random-design settings, showing that with a suitably chosen noise scale, our interleaved resampling-and-refitting algorithm yields an upper bound on the prediction error. Our theoretical analysis draws on tools from empirical process theory, harmonic analysis, Toeplitz operator theory, and sharp tensor concentration inequalities.
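The three-step structure described above can be sketched in code. This is an illustrative outline only, not the paper's exact estimator: the function names (`fit`, `wild_refit_excess_risk`), the Rademacher sign-flip form of the residual symmetrization, and the way the two refits are combined into an error proxy are all assumptions made for the sketch.

```python
import numpy as np

def wild_refit_excess_risk(X, y, fit, noise_scale=1.0, rounds=10,
                           subset_size=100, rng=None):
    """Illustrative sketch of an interleaved resampling-and-refitting loop.

    `fit` is the black-box training routine: fit(X, y) -> prediction function.
    The final aggregation below is a hypothetical proxy, not the paper's formula.
    """
    rng = np.random.default_rng(rng)
    n = len(y)

    # Step 1: fit once on the full data and form residuals.
    f_hat = fit(X, y)
    residuals = y - f_hat(X)

    # Step 2: randomized residual symmetrization -- flip residual signs with
    # independent Rademacher variables, scaled by a chosen noise scale,
    # to build pseudo-responses from the single training dataset.
    signs = rng.choice([-1.0, 1.0], size=n)
    pseudo_y = f_hat(X) + noise_scale * signs * residuals

    # Step 3: interleaved rounds -- resample two small sub-datasets from the
    # covariate/pseudo-response pairs and refit the black box on each.
    gaps = []
    for _ in range(rounds):
        idx_a = rng.choice(n, size=subset_size, replace=True)
        idx_b = rng.choice(n, size=subset_size, replace=True)
        g_a = fit(X[idx_a], pseudo_y[idx_a])
        g_b = fit(X[idx_b], pseudo_y[idx_b])
        # Discrepancy between the two refits, evaluated on all covariates;
        # averaging these gaps gives the (illustrative) prediction-error proxy.
        gaps.append(np.mean((g_a(X) - g_b(X)) ** 2))
    return float(np.mean(gaps))
```

Note that each round refits only on `subset_size` points rather than the full dataset, which is the source of the computational savings the abstract highlights.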