🤖 AI Summary
This paper studies stochastic optimization problems in which only noisy function evaluations are available, focusing on random search methods. To overcome the limitation that prior work requires strong smoothness assumptions, the authors introduce a weaker smoothness condition and establish convergence guarantees under it; they further exploit a translation-invariance property to balance noise and reduce variance in a principled way. For the finite-sum setting, they propose a variance-reduced random search variant that leverages multiple samples per iteration to accelerate convergence. The theoretical analysis shows that: (i) convergence is guaranteed under the weaker smoothness assumption; (ii) stronger smoothness yields tighter convergence rates; and (iii) the variance-reduced variant converges faster on finite-sum problems. These results broaden the applicability of random search methods and provide new tools for black-box optimization in noisy environments.
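To make the setup concrete, here is a minimal Python sketch of the classical two-point random search step the summary refers to: it probes the objective along a random direction and updates the iterate using only noisy function values. All names and constants (`f`, `lr`, `mu`, the Gaussian directions) are illustrative assumptions, not the paper's algorithm or step-size schedules.

```python
import numpy as np

def random_search(f, x0, steps=1000, lr=1e-2, mu=1e-3, rng=None):
    """Two-point random search on a noisy objective f (no gradients).

    lr (step size) and mu (smoothing radius) are illustrative constants,
    not the schedules analyzed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)       # random search direction
        g = (f(x + mu * u) - f(x)) / mu        # noisy directional slope
        x = x - lr * g * u                     # step along the direction
    return x

# Example: minimize a noisy quadratic using function values only.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x**2)) + 0.01 * rng.standard_normal()
x_est = random_search(f, x0=np.ones(5), rng=rng)
```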
📝 Abstract
We revisit random search for stochastic optimization, where only noisy function evaluations are available. We show that the method works under weaker smoothness assumptions than previously considered, and that stronger assumptions enable improved guarantees. In the finite-sum setting, we design a variance-reduced variant that leverages multiple samples to accelerate convergence. Our analysis relies on a simple translation-invariance property, which provides a principled way to balance noise and reduce variance.
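As a rough illustration of the finite-sum, multi-sample idea, the sketch below averages the two-point estimate over a sampled mini-batch of component functions, so that extra samples per iteration lower the variance of the update. The batching scheme and the names (`fs`, `batch`) are assumptions made for illustration; the paper's actual variance-reduction mechanism may differ.

```python
import numpy as np

def vr_random_search(fs, x0, steps=500, batch=8, lr=1e-2, mu=1e-3, rng=None):
    """Mini-batch random-search sketch for F(x) = mean_i fs[i](x),
    where each fs[i] returns a noisy function value.

    Generic averaging sketch; not the paper's exact variant.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    n = len(fs)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)       # shared search direction
        idx = rng.choice(n, size=min(batch, n), replace=False)
        # Average finite-difference slopes over the sampled components;
        # more samples per iteration means a lower-variance estimate.
        g = np.mean([(fs[i](x + mu * u) - fs[i](x)) / mu for i in idx])
        x = x - lr * g * u
    return x
```

Averaging over sampled components trades extra function evaluations per iteration for a lower-variance step, which is the mechanism behind the acceleration described above.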