Min-Max Optimisation for Nonconvex-Nonconcave Functions Using a Random Zeroth-Order Extragradient Algorithm

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies nonconvex-nonconcave (NC-NC) minimax optimization in three settings: unconstrained, constrained, and nonsmooth (nondifferentiable). To overcome the inability of existing methods to handle nonsmoothness and constraints jointly, the authors introduce the *proximal variational inequality* (PVI) framework, which incorporates Goldstein stationarity into a unified analytical foundation. Building on PVI, they propose a stochastic Gaussian-smoothed zeroth-order extragradient algorithm (ZO-EG) that requires no gradient information and naturally accommodates projection and proximal operators. Theoretically, ZO-EG converges to an ε-stationary neighbourhood with finite iteration complexity in all three NC-NC settings; the radius of the neighbourhood is explicitly controllable, and tight iteration-complexity bounds are derived. Key contributions include: (i) the novel PVI modeling paradigm, (ii) unified treatment of nonsmoothness and constraints within a single framework, and (iii) the first zeroth-order algorithm with provable convergence guarantees for general NC-NC minimax problems.
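For intuition, the core ZO-EG iteration can be sketched in a few lines. The following is a minimal illustration of the unconstrained, differentiable case only, not the paper's exact algorithm: it uses a simple averaged single-point-difference Gaussian estimator (no variance reduction) and hypothetical step-size and smoothing parameters.

```python
import numpy as np

def zo_grad(f, z, mu, rng, m=32):
    """Averaged Gaussian-smoothed zeroth-order gradient estimate:
    E[(f(z + mu*u) - f(z)) / mu * u] approximates the gradient of the
    smoothed function f_mu at z, for u ~ N(0, I)."""
    g = np.zeros_like(z)
    for _ in range(m):
        u = rng.standard_normal(z.shape)
        g += (f(z + mu * u) - f(z)) / mu * u
    return g / m

def zo_extragradient(f, x, y, steps=2000, eta=0.05, mu=1e-4, seed=0):
    """Zeroth-order extragradient for the unconstrained min-max problem
    min_x max_y f(x, y): extrapolate using gradient estimates at (x, y),
    then update using estimates taken at the extrapolated point."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        gx = zo_grad(lambda x_: f(x_, y), x, mu, rng)
        gy = zo_grad(lambda y_: f(x, y_), y, mu, rng)
        # extrapolation (look-ahead) step: descend in x, ascend in y
        x_half, y_half = x - eta * gx, y + eta * gy
        gx_h = zo_grad(lambda x_: f(x_, y_half), x_half, mu, rng)
        gy_h = zo_grad(lambda y_: f(x_half, y_), y_half, mu, rng)
        # actual update uses the look-ahead gradient estimates
        x, y = x - eta * gx_h, y + eta * gy_h
    return x, y

# toy bilinear saddle min_x max_y x*y, whose saddle point is (0, 0)
f = lambda x, y: float(x @ y)
x, y = zo_extragradient(f, np.array([1.0]), np.array([1.0]))
```

On this bilinear toy problem the look-ahead step is what keeps the iterates from orbiting the saddle point, which plain zeroth-order gradient descent-ascent would do.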

📝 Abstract
This study explores the performance of the random Gaussian smoothing Zeroth-Order ExtraGradient (ZO-EG) scheme considering min-max optimisation problems with possibly NonConvex-NonConcave (NC-NC) objective functions. We consider both unconstrained and constrained, differentiable and non-differentiable settings. We discuss the min-max problem from the point of view of variational inequalities. For the unconstrained problem, we establish the convergence of the ZO-EG algorithm to the neighbourhood of an $\epsilon$-stationary point of the NC-NC objective function, whose radius can be controlled under a variance reduction scheme, along with its complexity. For the constrained problem, we introduce the new notion of proximal variational inequalities and give examples of functions satisfying this property. Moreover, we prove analogous results to the unconstrained case for the constrained problem. For the non-differentiable case, we prove the convergence of the ZO-EG algorithm to a neighbourhood of an $\epsilon$-stationary point of the smoothed version of the objective function, where the radius of the neighbourhood can be controlled, which can be related to the $(\delta,\epsilon)$-Goldstein stationary point of the original objective function.
Problem

Research questions and friction points this paper is trying to address.

Solving min-max optimization for nonconvex-nonconcave functions
Analyzing convergence of ZO-EG algorithm for constrained problems
Establishing convergence to ε-stationary points in nonsmooth cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random Gaussian smoothing ZO-EG algorithm
Handles nonconvex-nonconcave variational inequalities
Proximal variational inequalities for constraints
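In the constrained setting each plain update is replaced by a projection (or, more generally, a proximal) step. A hypothetical sketch of one projected extragradient iteration on a box constraint, with gradient estimates supplied by an external oracle (e.g. a zeroth-order estimator), illustrates the structure; the box and step size here are illustrative, not from the paper.

```python
import numpy as np

def project_box(z, lo, hi):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(z, lo, hi)

def projected_eg_step(x, y, gx, gy, grad_at, eta, lo=-1.0, hi=1.0):
    """One projected extragradient iteration for min_x max_y f(x, y).
    gx, gy are gradient estimates at (x, y); grad_at returns estimates
    at the extrapolated point."""
    # extrapolation step, projected back onto the feasible box
    x_half = project_box(x - eta * gx, lo, hi)
    y_half = project_box(y + eta * gy, lo, hi)
    gx_h, gy_h = grad_at(x_half, y_half)
    # update step with the extrapolated gradients, again projected
    x_new = project_box(x - eta * gx_h, lo, hi)
    y_new = project_box(y + eta * gy_h, lo, hi)
    return x_new, y_new
```

Swapping `project_box` for a proximal operator of a nonsmooth regulariser gives the proximal form that the PVI framework is built to analyse.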