🤖 AI Summary
This paper tackles the low reliability of stochastic optimizer performance evaluation caused by run-to-run variability by proposing a statistically grounded, adaptive experimental-design method. It first derives a theoretical lower bound on the minimum number of independent runs required to guarantee a prescribed accuracy for key performance metrics, such as the best objective value and the convergence iteration count. Building on this bound, it presents an adaptive sampling algorithm that dynamically determines the requisite number of repetitions, terminating only when both a user-specified confidence level (e.g., 95%) and an absolute error tolerance are satisfied, thereby avoiding both premature stopping and unnecessary resource expenditure. The method integrates confidence interval estimation, hypothesis testing, and sequential sample-size determination, substantially enhancing reproducibility and statistical rigor in optimizer benchmarking and hyperparameter tuning. Empirical evaluation shows that the approach consistently keeps estimation error within the prescribed threshold while reducing redundant runs by over 30% on average.
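The adaptive stopping idea described above can be sketched with a minimal, generic version: keep repeating the optimizer run until a normal-approximation confidence interval for the mean metric is narrower than the tolerance. This is an illustration of the general technique, not the paper's actual algorithm; the function names, the toy "optimizer", and the normal-based interval are all assumptions made for the sketch.

```python
import math
import random
import statistics

def adaptive_runs(run_optimizer, eps, z=1.96, min_runs=5, max_runs=10_000):
    """Repeat a stochastic optimizer until the half-width of an
    approximate (normal-based) confidence interval for the mean metric
    drops below the absolute tolerance eps, or max_runs is reached.

    z=1.96 corresponds to a ~95% two-sided confidence level.
    Returns (estimated mean metric, number of runs used).
    """
    samples = [run_optimizer() for _ in range(min_runs)]
    while len(samples) < max_runs:
        mean = statistics.fmean(samples)
        sd = statistics.stdev(samples)          # run-to-run variability
        half_width = z * sd / math.sqrt(len(samples))
        if half_width <= eps:                   # accuracy target met: stop
            return mean, len(samples)
        samples.append(run_optimizer())         # otherwise, one more run
    return statistics.fmean(samples), len(samples)

# Hypothetical stand-in for a stochastic optimizer: its "best objective
# value" per run is noisy around 3.0 with standard deviation 0.5.
rng = random.Random(0)
mean, n = adaptive_runs(lambda: 3.0 + rng.gauss(0, 0.5), eps=0.05)
```

The stopping rule makes the final sample count data-driven: a noisier optimizer automatically triggers more repeats, while a stable one stops early, which is the resource saving the summary refers to.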
📝 Abstract
A key trait of stochastic optimizers is that multiple runs of the same optimizer on the same problem can produce different results. As a result, their performance is evaluated over several repeats, or runs, on the problem. However, the accuracy of the estimated performance metrics depends on the number of runs and should be studied using statistical tools. We present a statistical analysis of the common metrics and develop guidelines for experiment design to measure an optimizer's performance using these metrics to a high level of confidence and accuracy. To this end, we first discuss the confidence intervals of the metrics and how they relate to the number of runs in an experiment. We then derive a lower bound on the number of repeats required to guarantee a given accuracy in the metrics. Using this bound, we propose an algorithm that adaptively adjusts the number of repeats needed to ensure the accuracy of the evaluated metric. Our simulation results demonstrate the utility of our analysis: it enables reliable benchmarking as well as hyperparameter tuning, and prevents us from drawing premature conclusions regarding the performance of stochastic optimizers.
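The paper's own lower bound is not reproduced here, but as an illustration of the general form such a bound takes, the classical normal-approximation requirement for estimating a mean metric to within absolute error $\varepsilon$ at confidence level $1-\alpha$ is:

```latex
n \;\ge\; \left(\frac{z_{1-\alpha/2}\,\sigma}{\varepsilon}\right)^{2}
```

Here $\sigma$ is the run-to-run standard deviation of the metric and $z_{1-\alpha/2}$ is the corresponding standard normal quantile; the bound derived in the paper may differ in form and in the metrics it covers.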