🤖 AI Summary
This work addresses the computational challenge of efficiently solving for Nash equilibria in continuous-time, finite-state mean-field games, a setting where existing methods often struggle. The paper transfers the notion of regularization from discrete-time formulations into this continuous-time framework for the first time, thereby constructing a computable regularized equilibrium. It further extends fixed-point iteration and fictitious play algorithms to this setting, leveraging a continuous-time Markov chain model of agent dynamics to improve computational tractability. Numerical experiments on representative scenarios demonstrate the effectiveness and practical utility of the proposed algorithms, confirming their potential for real-world applications in large-scale dynamic systems.
📝 Abstract
Mean field games (MFGs) offer a powerful framework for modeling large-scale multi-agent systems. This paper addresses MFGs formulated in continuous time with discrete state spaces, where agents' dynamics are governed by continuous-time Markov chains -- a setting relevant to applications such as population dynamics and queueing networks. While prior research has largely focused on theoretical aspects of continuous-time discrete-state MFGs, efficient computational methods for determining equilibria remain underdeveloped. Inspired by discrete-time approaches, we approximate classical Nash equilibria by regularization, enabling more computationally tractable solution algorithms. Specifically, we define regularized equilibria for continuous-time MFGs and extend the classical fixed-point iteration and fictitious play algorithms to compute them. We validate the effectiveness and practicality of our approach via illustrative numerical examples.
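To make the regularized fixed-point iteration concrete, the following is a minimal sketch under illustrative assumptions (a two-state model, two candidate jump rates `lam`, effort costs `cost_a`, congestion cost equal to the mass in the agent's state, and entropy regularization with temperature `tau`); none of these specifics come from the paper itself. Each iteration solves a time-discretized regularized HJB equation backward (softmax policies in place of hard minimization), propagates the population forward through the induced continuous-time Markov chain rates, and applies a damped update to the mean-field flow.

```python
import numpy as np

def solve_regularized_mfg(T=1.0, N=50, tau=0.5, damping=0.5, iters=200, tol=1e-8):
    """Damped fixed-point iteration for an entropy-regularized two-state MFG.

    States {0, 1}; action a selects a jump rate lam[a] to the other state.
    Agents in state s pay a congestion cost m[t, s] (the population mass in
    their state) plus an effort cost cost_a[a].  All model parameters here
    are illustrative assumptions, not taken from the paper.
    """
    dt = T / N
    lam = np.array([0.2, 2.0])     # jump rates offered by the two actions (assumed)
    cost_a = np.array([0.0, 0.5])  # effort cost of each action (assumed)

    m = np.full((N + 1, 2), 0.5)   # initial guess for the mean-field flow
    pi = np.zeros((N, 2, 2))       # pi[t, s, a]: policy on the time grid
    for _ in range(iters):
        # Backward pass: regularized HJB via explicit Euler discretization.
        V = np.zeros((N + 1, 2))   # zero terminal cost (assumed)
        for t in range(N - 1, -1, -1):
            for s in range(2):
                # Q[a]: running cost plus expected continuation under rate lam[a].
                Q = ((m[t, s] + cost_a) * dt + V[t + 1, s]
                     + lam * dt * (V[t + 1, 1 - s] - V[t + 1, s]))
                z = np.exp(-(Q - Q.min()) / tau)
                pi[t, s] = z / z.sum()                 # softmax best response
                V[t, s] = Q.min() - tau * np.log(z.sum())  # soft (log-sum-exp) value
        # Forward pass: Kolmogorov equation under the induced jump rates.
        m_new = np.zeros_like(m)
        m_new[0] = np.array([0.5, 0.5])  # fixed initial distribution (assumed)
        for t in range(N):
            rate = pi[t] @ lam           # rate[s]: mean jump rate out of state s
            for s in range(2):
                m_new[t + 1, s] = (m_new[t, s]
                                   + dt * (rate[1 - s] * m_new[t, 1 - s]
                                           - rate[s] * m_new[t, s]))
        # Damped fixed-point update on the mean-field flow.
        m_next = (1 - damping) * m + damping * m_new
        if np.max(np.abs(m_next - m)) < tol:
            m = m_next
            break
        m = m_next
    return m, pi
```

The fictitious play variant mentioned in the abstract differs only in the update step: instead of a fixed damping weight, the new flow is averaged into the running history with weight 1/k at iteration k.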