🤖 AI Summary
This paper studies symmetric cone games (SCGs), a generalized class of two-player zero-sum games, and proposes the first optimistic online learning algorithm tailored to symmetric cone strategy spaces for efficiently computing ε-saddle points. Methodologically, the authors extend optimistic follow-the-regularized-leader (OFTRL) to the Euclidean Jordan algebra setting and design the Optimistic Symmetric Cone Multiplicative Weights Update (OSCMWU) algorithm; they further establish, for the first time, the strong convexity of the symmetric cone negative entropy with respect to the trace norm. Theoretically, the approach achieves an O(1/ε) iteration complexity, matching the best known rates while extending beyond prior analyses restricted to simplices or trace-one positive semidefinite matrices. Practically, SCGs unify the modeling of classical games, quantum games, and applications such as distance metric learning. Empirical validation on the Fermat–Weber problem confirms the method's effectiveness.
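To make the unified update concrete: on a symmetric cone, the multiplicative-weights step normalizes the Jordan-algebra exponential to trace one, u = exp(x)/tr(exp(x)). The sketch below (illustrative, not the paper's code) shows the two canonical instances of this single formula: entrywise exponential on the nonnegative orthant (classical softmax) and the matrix exponential on the PSD cone (a density matrix).

```python
import numpy as np

def mwu_orthant(x):
    """Nonnegative orthant: exp acts entrywise, giving the classical softmax."""
    e = np.exp(x - x.max())              # shift for numerical stability
    return e / e.sum()

def mwu_psd(X):
    """PSD cone: exp is the matrix exponential, giving a trace-one PSD matrix."""
    S = (X + X.T) / 2                    # symmetrize the input
    w, V = np.linalg.eigh(S)
    E = (V * np.exp(w - w.max())) @ V.T  # stable matrix exponential via eigh
    return E / np.trace(E)

v = mwu_orthant(np.array([1.0, 2.0, 3.0]))
M = mwu_psd(np.random.default_rng(0).standard_normal((3, 3)))
# v lies in the probability simplex; M is trace-one and positive semidefinite.
```

Both outputs live on the "generalized simplex" (trace-one slice) of their respective cones, which is exactly the strategy space the SCG framework unifies.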
📝 Abstract
Optimistic online learning algorithms have led to significant advances in equilibrium computation, particularly for two-player zero-sum games, achieving an iteration complexity of $\mathcal{O}(1/\epsilon)$ to reach an $\epsilon$-saddle point. These advances have been established in normal-form games, where strategies are simplex vectors, and quantum games, where strategies are trace-one positive semidefinite matrices. We extend optimistic learning to symmetric cone games (SCGs), a class of two-player zero-sum games where strategy spaces are generalized simplices (trace-one slices of symmetric cones). A symmetric cone is the cone of squares of a Euclidean Jordan algebra; canonical examples include the nonnegative orthant, the second-order cone, the cone of positive semidefinite matrices, and their products, all fundamental to convex optimization. SCGs unify normal-form and quantum games and, as we show, offer significantly greater modeling flexibility, allowing us to model applications such as distance metric learning problems and the Fermat–Weber problem. To compute approximate saddle points in SCGs, we introduce the Optimistic Symmetric Cone Multiplicative Weights Update algorithm and establish an iteration complexity of $\mathcal{O}(1/\epsilon)$ to reach an $\epsilon$-saddle point. Our analysis builds on the Optimistic Follow-the-Regularized-Leader framework, with a key technical contribution being a new proof of the strong convexity of the symmetric cone negative entropy with respect to the trace norm, a result that may be of independent interest.
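In the simplest symmetric cone (the nonnegative orthant), the optimistic update reduces to optimistic multiplicative weights on the probability simplex. The sketch below (with a hypothetical 2×2 payoff matrix `A` chosen for illustration, not taken from the paper) runs both players' optimistic FTRL steps, where each player predicts that the last observed loss vector will repeat, and checks that the duality gap of the time-averaged strategies shrinks toward zero.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical 2x2 payoff matrix: the row player minimizes x^T A y,
# the column player maximizes it.  This game's value is 1/3.
A = np.array([[3.0, -1.0], [-1.0, 1.0]])

eta, T = 0.1, 2000
Lx = np.zeros(2); lx = np.zeros(2)   # row player's cumulative / last loss
Ly = np.zeros(2); ly = np.zeros(2)   # column player's cumulative / last gain
xbar = np.zeros(2); ybar = np.zeros(2)
for _ in range(T):
    x = softmax(-eta * (Lx + lx))    # optimistic step: predict last loss repeats
    y = softmax(+eta * (Ly + ly))
    lx, ly = A @ y, A.T @ x          # observed loss / gain vectors
    Lx += lx; Ly += ly
    xbar += x / T; ybar += y / T     # running averages of the iterates

gap = (A.T @ xbar).max() - (A @ ybar).min()   # duality gap of average play
```

Since the optimistic corrections cancel once play stabilizes, the sum of both players' regrets stays bounded, so the averaged strategies form an $\epsilon$-saddle point after $\mathcal{O}(1/\epsilon)$ iterations; the paper's OSCMWU generalizes this same scheme to arbitrary symmetric cones.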