Minimisation of Submodular Functions Using Gaussian Zeroth-Order Random Oracles

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses submodular function minimization in black-box settings where explicit gradients are unavailable. We propose the first algorithmic framework integrating Gaussian smoothing with zeroth-order stochastic estimation for submodular minimization, enabling both offline and online solutions via gradient-free stochastic approximation. Theoretically, the method converges to an ε-approximate global optimum in polynomial time in the offline setting; in the online setting, it is Hannan-consistent with respect to static regret and attains a dynamic regret bound of O(√(N P_N^*)), where N is the number of iterations and P_N^* denotes the path length. The approach unifies Gaussian smoothing, zeroth-order optimization, and static/dynamic regret analysis, and numerical experiments validate its effectiveness and robustness in both offline and online scenarios. The key contribution is the first rigorous theoretical connection between zeroth-order stochastic optimization and submodular structure, overcoming classical limitations that require either gradient access or restrictive structural assumptions.

📝 Abstract
We consider the minimisation problem of submodular functions and investigate the application of a zeroth-order method to this problem. The method is based on exploiting a Gaussian smoothing random oracle to estimate the smoothed function gradient. We prove the convergence of the algorithm to a global $\varepsilon$-approximate solution in the offline case and show that the algorithm is Hannan-consistent in the online case with respect to static regret. Moreover, we show that the algorithm achieves $O(\sqrt{N P_N^\ast})$ dynamic regret, where $N$ is the number of iterations and $P_N^\ast$ is the path length. The complexity analysis and hyperparameter selection are presented for all the cases. The theoretical results are illustrated via numerical examples.
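The Gaussian smoothing random oracle described in the abstract can be sketched as a Monte Carlo gradient estimate of the smoothed function $f_\mu(x) = \mathbb{E}_u[f(x + \mu u)]$, $u \sim \mathcal{N}(0, I)$. The snippet below is an illustrative two-point estimator in the Nesterov–Spokoiny style, not the paper's exact oracle; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, samples=1000, rng=None):
    """Two-point Gaussian-smoothing gradient estimate.

    The smoothed function is f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, I).
    Its gradient equals E_u[(f(x + mu*u) - f(x)) / mu * u], which is
    approximated here by a Monte Carlo average over `samples` draws.
    Only zeroth-order (function value) access to f is required.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)                       # base value, reused across samples
    g = np.zeros_like(x, dtype=float)
    for _ in range(samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / samples
```

On a smooth test function such as $f(x) = \lVert x \rVert^2$, the estimate converges to the true gradient $2x$ (up to an $O(\mu)$ smoothing bias) as the sample count grows.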
Problem

Research questions and friction points this paper is trying to address.

Minimising submodular functions with zeroth-order (gradient-free) methods
Proving convergence to a global ε-approximate solution with complexity guarantees
Establishing static and dynamic regret bounds for the online case
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a Gaussian smoothing random oracle for gradient estimation
Proves convergence to a global ε-approximate solution in polynomial time
Establishes an O(√(N P_N^*)) dynamic regret bound in the online setting
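To make the offline pipeline concrete, here is a minimal sketch under stated assumptions: projected zeroth-order descent on the Lovász extension (the standard convex relaxation for submodular minimisation) of a toy cut-plus-modular function. The instance, all names, and all hyperparameters are hypothetical illustrations, not the paper's algorithm or constants.

```python
import numpy as np

# Hypothetical toy instance: F(S) = cut(S) + modular term on a 4-cycle.
# This F is submodular with unique minimiser S = {0, 1}, F({0, 1}) = -4.
EDGES = [(0, 1), (1, 2), (2, 3), (0, 3)]
C = np.array([-3.0, -3.0, 2.0, 2.0])

def F(S):
    return sum((i in S) != (j in S) for i, j in EDGES) + sum(C[i] for i in S)

def lovasz(x):
    """Lovász extension of F: convex, agrees with F on {0,1}^n."""
    val, prev, S = 0.0, F(set()), set()
    for i in np.argsort(-x):        # visit coordinates in decreasing order
        S.add(int(i))
        fi = F(S)
        val += x[i] * (fi - prev)   # telescoping marginal gains
        prev = fi
    return val

def zo_minimise(steps=200, samples=16, mu=1e-2, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(C)
    x = np.full(n, 0.5)
    best_S, best_val = set(), F(set())
    for _ in range(steps):
        # Averaged two-point Gaussian-smoothing gradient estimate.
        fx = lovasz(x)
        g = np.zeros(n)
        for _ in range(samples):
            u = rng.standard_normal(n)
            g += (lovasz(x + mu * u) - fx) / mu * u
        x = np.clip(x - eta * g / samples, 0.0, 1.0)   # projected step
        S = {i for i in range(n) if x[i] > 0.5}        # round the iterate
        if F(S) < best_val:
            best_S, best_val = S, F(S)
    return best_S, best_val
```

Tracking the best rounded iterate is a common practical safeguard against the noise of the stochastic oracle; on this toy instance the descent drives the relaxation toward the indicator of the minimising set.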