🤖 AI Summary
This work studies nonconvex–(strongly) concave minimax optimization with coupled linear constraints. We propose the first single-loop primal-dual algorithms with theoretical guarantees for both deterministic and stochastic zeroth-order (gradient-free) settings. Our methods integrate zeroth-order gradient estimation, alternating projection updates, regularization-based momentum, and specialized techniques for handling the constraint coupling. Theoretically, we establish the first iteration complexity bounds for this problem class: $\mathcal{O}(\varepsilon^{-2})$ in the deterministic setting and $\tilde{\mathcal{O}}(\varepsilon^{-3})$ in the stochastic setting (for the strongly concave case), significantly improving upon existing zeroth-order methods. Moreover, we provide the first rigorous convergence analysis for nonconvex–concave minimax problems under coupled linear constraints, thereby filling a critical theoretical gap. The proposed framework applies to practical domains involving coupled constraints, including resource allocation, network flow optimization, and adversarial learning.
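To illustrate the zeroth-order gradient estimation used as a building block here, the following is a minimal sketch of a standard two-point Gaussian-smoothing estimator. This is a generic illustration of the technique, not the paper's exact estimator; the function name `zo_gradient` and the parameters `mu` (smoothing radius) and `num_samples` are hypothetical.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_samples=100, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages finite-difference directional derivatives along random
    Gaussian directions u: ((f(x + mu*u) - f(x)) / mu) * u, which is an
    unbiased estimator of the gradient of the smoothed function f_mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)           # random search direction
        g += (f(x + mu * u) - f(x)) / mu * u  # forward finite difference
    return g / num_samples
```

In a primal-dual scheme of the kind described above, such an estimate would replace the exact gradient in each projected-gradient step, with the smoothing radius and sample count controlling the bias-variance trade-off of the estimator.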
📝 Abstract
In this paper, we study zeroth-order algorithms for nonconvex minimax problems with coupled linear constraints under both deterministic and stochastic settings. Such problems have attracted wide attention in machine learning, signal processing, and many other fields in recent years, arising, e.g., in adversarial attacks on resource allocation and network flow problems. We propose two single-loop algorithms, namely the zeroth-order primal-dual alternating projected gradient (ZO-PDAPG) algorithm and the zeroth-order regularized momentum primal-dual projected gradient (ZO-RMPDPG) algorithm, for solving deterministic and stochastic nonconvex-(strongly) concave minimax problems with coupled linear constraints. The iteration complexity of the two proposed algorithms to obtain an $\varepsilon$-stationary point is proved to be $\mathcal{O}(\varepsilon^{-2})$ (resp. $\mathcal{O}(\varepsilon^{-4})$) for solving nonconvex-strongly concave (resp. nonconvex-concave) minimax problems with coupled linear constraints under deterministic settings, and $\tilde{\mathcal{O}}(\varepsilon^{-3})$ (resp. $\tilde{\mathcal{O}}(\varepsilon^{-6.5})$) under stochastic settings, respectively. To the best of our knowledge, these are the first two zeroth-order algorithms with iteration complexity guarantees for solving nonconvex-(strongly) concave minimax problems with coupled linear constraints under the deterministic and stochastic settings.