🤖 AI Summary
This work addresses offline and online min-max optimization problems where the objective function is non-smooth, submodular in the minimization variables, and concave in the maximization variables. It introduces, for the first time, a zeroth-order optimization approach tailored to this class of non-smooth submodular–concave min-max problems. The method uses subgradients of the Lovász extension for the minimization variables and Gaussian smoothing to estimate gradients for the maximization variables. In the offline setting, the proposed algorithm provably converges in expectation to an ε-saddle point. In the online setting, it achieves a duality-gap bound of O(√(N·P̄_N)), where N is the number of rounds and P̄_N is the path length of the sequence of optimal decisions, a measure of the problem's temporal variation. Numerical experiments corroborate the theoretical findings, demonstrating the efficacy of the approach.
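To make the mechanics concrete, below is a minimal NumPy sketch of the two ingredients the summary names: a greedy (Edmonds) subgradient of the Lovász extension for the minimization variables and a two-point Gaussian-smoothing gradient estimate for the maximization variables. The loop `zo_minmax`, the toy objective, the step sizes, and the smoothing radius are illustrative assumptions, not the paper's algorithm or tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)


def lovasz_value_and_subgradient(F, x):
    """Evaluate the Lovász extension of a set function F at x in [0,1]^n
    and return a subgradient, via the greedy (Edmonds) construction:
    sort the coordinates of x in decreasing order and take the marginal
    gains of F along the induced chain of sets."""
    order = np.argsort(-x)               # coordinates in decreasing order of x
    g = np.zeros_like(x)
    S = set()
    f_empty = prev = F(S)
    for i in order:
        S = S | {int(i)}
        cur = F(S)
        g[int(i)] = cur - prev           # marginal gain F(S_k) - F(S_{k-1})
        prev = cur
    return f_empty + g @ x, g            # extension value and a subgradient


def smoothed_grad_estimate(phi, y, mu):
    """Two-point zeroth-order estimate of the gradient of the Gaussian
    smoothing phi_mu(y) = E_u[phi(y + mu * u)], u ~ N(0, I)."""
    u = rng.standard_normal(y.shape)
    return (phi(y + mu * u) - phi(y)) / mu * u


def zo_minmax(F, n, m, steps=500, eta_x=0.05, eta_y=0.05, mu=1e-2):
    """Hypothetical alternating primal-dual loop: projected subgradient
    descent on the Lovász extension in x, zeroth-order gradient ascent
    in y (step sizes and smoothing radius are placeholders)."""
    x, y = np.full(n, 0.5), np.zeros(m)
    for _ in range(steps):
        # Descent on the minimiser: Lovász subgradient, then projection onto [0,1]^n.
        _, g_x = lovasz_value_and_subgradient(lambda S: F(S, y), x)
        x = np.clip(x - eta_x * g_x, 0.0, 1.0)
        # Ascent on the maximiser: estimate the smoothed gradient from
        # function evaluations of the extension only.
        phi = lambda z: lovasz_value_and_subgradient(lambda S: F(S, z), x)[0]
        y = y + eta_y * smoothed_grad_estimate(phi, y, mu)
    return x, y


# Toy instance: sqrt of a modular weight (submodular in S) plus a coupling
# term that is modular in S and concave (linear minus quadratic) in y.
w, A = rng.random(8), rng.standard_normal((8, 3))
F = lambda S, y: (np.sqrt(w[list(S)].sum())
                  + A[list(S)].sum(axis=0) @ y - 0.5 * y @ y)
x_hat, y_hat = zo_minmax(F, n=8, m=3)
```

Note that the y-update touches the objective only through function evaluations, which is what makes the scheme zeroth-order on the maximization side; the x-update needs set-function evaluations along one sorted chain per step.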
📝 Abstract
We consider max-min and min-max problems with objective functions that are possibly non-smooth, submodular with respect to the minimiser, and concave with respect to the maximiser. We investigate the performance of a zeroth-order method applied to this problem. The method uses a subgradient of the Lovász extension of the objective function with respect to the minimiser, and Gaussian smoothing to estimate the gradient of the smoothed function with respect to the maximiser. In the offline case, we prove that the algorithm converges in expectation to an $\epsilon$-saddle point. In the online setting, we show that, in expectation, the algorithm achieves an $O(\sqrt{N\bar{P}_N})$ online duality gap, where $N$ is the number of rounds and $\bar{P}_N$ is the path length of the sequence of optimal decisions. The complexity analysis and hyperparameter selection are presented for all cases. The theoretical results are illustrated via numerical examples.
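For orientation, one common way to formalise the two guarantees mentioned above is sketched below; the paper's exact definitions, norms, and constants may differ.

```latex
% Offline: (x^*, y^*) is an expected eps-saddle point when
\mathbb{E}\Big[\max_{y \in \mathcal{Y}} f(x^\ast, y)
             - \min_{x \in \mathcal{X}} f(x, y^\ast)\Big] \le \epsilon .

% Online: with per-round saddle points (x_n^*, y_n^*) of the round-n
% objective f_n, the cumulative duality gap is bounded in terms of the
% path length \bar{P}_N of that comparator sequence:
\sum_{n=1}^{N} \mathbb{E}\big[ f_n(x_n, y_n^\ast) - f_n(x_n^\ast, y_n) \big]
  = O\big(\sqrt{N \bar{P}_N}\big),
\qquad
\bar{P}_N = \sum_{n=2}^{N}
  \big\lVert (x_n^\ast, y_n^\ast) - (x_{n-1}^\ast, y_{n-1}^\ast) \big\rVert .
```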