Stochastic Optimization with Random Search

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies stochastic optimization problems where only noisy function evaluations are accessible, focusing on random search methods. To overcome the limitation of prior work requiring strong smoothness assumptions, we introduce a weaker smoothness condition and establish convergence guarantees under it; we further exploit a simple translation-invariance property to balance noise and reduce variance in a principled way. For the finite-sum setting, we propose the first variance-reduced random search variant leveraging multiple samples per iteration, achieving significantly accelerated convergence. Theoretical analysis shows: (i) convergence is ensured under the weaker smoothness assumption; (ii) under stronger smoothness, tighter convergence rates are attained; and (iii) the variance-reduced variant achieves faster convergence for finite-sum problems. These results broaden the applicability of random search methods and provide new tools for black-box optimization in noisy environments.

📝 Abstract
We revisit random search for stochastic optimization, where only noisy function evaluations are available. We show that the method works under weaker smoothness assumptions than previously considered, and that stronger assumptions enable improved guarantees. In the finite-sum setting, we design a variance-reduced variant that leverages multiple samples to accelerate convergence. Our analysis relies on a simple translation invariance property, which provides a principled way to balance noise and reduce variance.
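To make the setting concrete, here is a minimal sketch of basic random search on a toy objective with Gaussian evaluation noise. The function names, step size, and greedy acceptance rule are illustrative assumptions for this sketch, not the paper's actual algorithm or analysis:

```python
import random

def noisy_quadratic(x, noise=0.1):
    """Toy objective: f(x) = sum(x_i^2), observed only through noisy evaluations."""
    return sum(v * v for v in x) + random.gauss(0.0, noise)

def random_search(f, x0, step=0.5, iters=2000, seed=0):
    """Basic random search (illustrative): propose a random Gaussian
    perturbation of the current point and keep it if the noisy
    evaluation improves on the best noisy value seen so far."""
    random.seed(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [v + random.gauss(0.0, step) for v in x]
        fc = f(cand)
        if fc < fx:  # comparison uses noisy values; noise limits final accuracy
            x, fx = cand, fc
    return x, fx

x, fx = random_search(noisy_quadratic, [3.0, -2.0])
print(x, fx)
```

Because acceptance is decided on noisy comparisons, the iterate stalls once improvements fall below the noise level, which is exactly the regime where the paper's noise/variance balancing matters.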
Problem

Research questions and friction points this paper is trying to address.

Random search for stochastic optimization with noisy evaluations
Convergence guarantees under weaker smoothness assumptions than prior work; stronger assumptions yield improved rates
Variance-reduced variant accelerates convergence in finite-sum settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random search under weaker smoothness assumptions
Variance-reduced variant using multiple samples
Translation invariance property balances noise
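The variance-reduction idea noted above, using multiple samples per iteration, can be illustrated by averaging several noisy evaluations at the same point: averaging n independent samples shrinks the evaluation variance by a factor of n. Everything below (`noisy_f`, `averaged_eval`, the sample counts) is a hypothetical sketch, not the paper's variant:

```python
import random

def noisy_f(x, noise=0.5):
    """Toy objective with Gaussian evaluation noise."""
    return (x - 1.0) ** 2 + random.gauss(0.0, noise)

def averaged_eval(f, x, n_samples=16):
    """Average n_samples independent noisy evaluations at x;
    the variance of the average is noise_variance / n_samples."""
    return sum(f(x) for _ in range(n_samples)) / n_samples

def var(xs):
    """Plain (biased) sample variance."""
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

random.seed(1)
single = [noisy_f(0.0) for _ in range(500)]
avg = [averaged_eval(noisy_f, 0.0) for _ in range(500)]
print(var(single), var(avg))
```

In a random-search loop, using the averaged evaluation makes noisy comparisons between candidate points far more reliable, at the cost of more function evaluations per iteration.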
El Mahdi Chayti
Machine Learning and Optimization Laboratory (MLO), EPFL
Taha El Bakkali El Kadi
UM6P College of Computing
Omar Saadi
UM6P College of Computing
Martin Jaggi
EPFL
Machine Learning · Optimization