🤖 AI Summary
This work addresses monotone near-zero-sum games—a newly introduced class of games—bridging a theoretical gap in gradient complexity between monotone zero-sum and monotone general-sum games. Methodologically, it decomposes the original problem into a sequence of monotone zero-sum subproblems and solves them via a gradient-based sequential convex–concave minimax algorithm. Contributions include: (1) a formal definition of the near-zero-sum structure, subsuming zero-sum games as a special case; (2) a decomposition strategy that substantially reduces gradient complexity, relaxing the restrictive zero-sum assumption; and (3) enhanced scalability and practicality across real-world applications—including resource allocation and adversarial learning—while preserving theoretical rigor. Experiments demonstrate the proposed method’s superiority in both convergence speed and solution quality compared to existing approaches.
📝 Abstract
Zero-sum and non-zero-sum (aka general-sum) games are relevant in a wide range of applications. While general non-zero-sum games are computationally hard, researchers focus on the special class of monotone games for gradient-based algorithms. However, there is a substantial gap between the gradient complexity of monotone zero-sum and monotone general-sum games. Moreover, in many practical scenarios of games the zero-sum assumption needs to be relaxed. To address these issues, we define a new intermediate class of monotone near-zero-sum games that contains monotone zero-sum games as a special case. Then, we present a novel algorithm that transforms the near-zero-sum games into a sequence of zero-sum subproblems, improving the gradient-based complexity for the class. Finally, we demonstrate the applicability of this new class to model practical scenarios of games motivated from the literature.