Nash Equilibrium and Learning Dynamics in Three-Player Matching m-Action Games

📅 2024-02-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses theoretical gaps in understanding Nash equilibrium structure and learning dynamics in three-player matching-pennies games, where classical two-player zero-sum frameworks fail to capture essential complexities. Method: We introduce a minimal synchronous competitive model and propose a “synchronous–rotational–competitive” tripartite force decomposition framework to unify the analysis of multi-agent learning dynamics. Leveraging Follow-the-Regularized-Leader (FTRL)-based online learning algorithms, we combine game-theoretic analysis, dynamical systems modeling, and extensive numerical experiments. Contribution/Results: We provide the first complete classification of all Nash equilibria in three-player matching-pennies games. Experimentally, we demonstrate that FTRL-type algorithms exhibit non-convergent periodic behavior and novel attractor structures—distinct from two-player settings—thereby establishing foundational theoretical support and a new analytical paradigm for multi-agent learning.

📝 Abstract
Learning in games concerns the processes by which multiple players learn their optimal strategies through repeated play. The dynamics of learning between two players in zero-sum games, such as Matching Pennies, where the players' interests are directly opposed, have already been well analyzed. However, the dynamics of learning among three players remain unexplored and challenging to analyze. In this study, we formulate a minimalistic game in which three players compete to match their actions with one another. Although interaction among three players diversifies and complicates the Nash equilibria, we fully analyze the equilibria. We also discuss the dynamics of learning under several well-known algorithms in the Follow the Regularized Leader family. From both theoretical and experimental perspectives, we characterize the dynamics by decomposing three-player interactions into three forces: synchronizing actions, switching actions rotationally, and seeking competition.
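The FTRL dynamics described above can be sketched numerically. The following is a minimal illustrative sketch, not the paper's exact game: it runs multiplicative weights, an FTRL instance with entropic regularization, for three players under a hypothetical cyclic payoff in which players 0 and 1 each try to match the next player's action while player 2 tries to mismatch player 0. Function name, step count, learning rate, and payoff structure are all assumptions made for illustration.

```python
import numpy as np

def ftrl_three_player(n_steps=2000, eta=0.05, m=2, seed=0):
    """Multiplicative-weights (FTRL with entropic regularizer) dynamics
    for three players with m actions each.

    The payoffs are an illustrative assumption, not the paper's exact
    specification: players 0 and 1 gain +1 for matching the next
    player's action (-1 otherwise); player 2 gains +1 for mismatching
    player 0 (-1 otherwise).
    """
    rng = np.random.default_rng(seed)
    # Cumulative expected payoffs per action; tiny noise breaks symmetry.
    scores = rng.normal(0.0, 1e-3, size=(3, m))
    trajectory = np.empty((n_steps, 3, m))
    for t in range(n_steps):
        # FTRL with an entropy regularizer reduces to a softmax of the
        # cumulative scores (shifted for numerical stability).
        x = np.exp(eta * (scores - scores.max(axis=1, keepdims=True)))
        x /= x.sum(axis=1, keepdims=True)
        trajectory[t] = x
        # Expected per-action payoff against the opponents' mixtures.
        scores[0] += 2.0 * x[1] - 1.0   # player 0: match player 1
        scores[1] += 2.0 * x[2] - 1.0   # player 1: match player 2
        scores[2] += 1.0 - 2.0 * x[0]   # player 2: mismatch player 0
    return trajectory

traj = ftrl_three_player()
```

Under this mixed match/mismatch structure the strategy trajectories need not converge to a fixed point, loosely echoing the non-convergent periodic behavior the summary reports for FTRL-type algorithms in the three-player setting.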
Problem

Research questions and friction points this paper is trying to address.

Analyzes learning dynamics in three-player matching games.
Explores Nash equilibria in complex three-player interactions.
Characterizes learning dynamics using Follow the Regularized Leader algorithms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully classifies the Nash equilibria of a three-player matching game
Analyzes learning dynamics under Follow the Regularized Leader algorithms
Decomposes three-player interactions into synchronizing, rotational, and competitive forces