🤖 AI Summary
This paper investigates whether AI agents can improve social welfare in stochastic games by paying a fixed computational cost to simulate an opponent's mixed strategy. Method: We employ game-theoretic modeling, Nash equilibrium analysis, NP-hardness proofs, and extensive simulations to systematically characterize how mixed-strategy simulation affects equilibrium structure and social welfare. Contribution/Results: (1) Simulation is ineffective in games where the simulatee can respond to the simulator's action, and deciding its equilibrium impact is NP-hard in general; (2) In three settings—adjustable trust levels, trust–coordination coupling, and privacy-preserving interaction—simulation strictly enables Pareto improvements; (3) We derive formal criteria that provide a rigorous foundation for designing trustworthy AI interaction mechanisms. This work constitutes the first systematic analysis of how strategic simulation affects equilibrium outcomes and welfare in multi-agent stochastic games, bridging theoretical guarantees with practical AI design principles.
📝 Abstract
AI agents will be predictable in certain ways that traditional agents are not. Where and how can we leverage this predictability in order to improve social welfare? We study this question in a game-theoretic setting where one agent can pay a fixed cost to simulate the other in order to learn its mixed strategy. As a negative result, we prove that, in contrast to prior work on pure-strategy simulation, enabling mixed-strategy simulation may no longer lead to improved outcomes for both players in all so-called "generalised trust games". In fact, mixed-strategy simulation does not help in any game where the simulatee's action can depend on that of the simulator. We also show that, in general, deciding whether simulation introduces Pareto-improving Nash equilibria in a given game is NP-hard. As positive results, we establish that mixed-strategy simulation can improve social welfare if the simulator has the option to scale their level of trust, if the players face challenges with both trust and coordination, or if maintaining some level of privacy is essential for enabling cooperation.
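To make the setting concrete, the following sketch works through a minimal one-shot trust game with hypothetical payoffs (the payoff values, variable names, and the cost `SIM_COST` are illustrative assumptions, not taken from the paper): the simulator can either act on its prior or pay a fixed cost to learn the simulatee's mixed strategy and best-respond to it.

```python
# Illustrative trust game (payoffs are hypothetical, not from the paper).
# Row player = simulator; column player = simulatee.
#   Walk away          -> (1, 1)
#   Trust + Cooperate  -> (2, 2)
#   Trust + Defect     -> (0, 3)

SIM_COST = 0.25  # assumed fixed cost of simulation

def no_simulation_outcome():
    # Without simulation, a self-interested simulatee defects whenever
    # trusted, so the simulator's best response is to walk away.
    return (1.0, 1.0)

def simulation_outcome(p_cooperate):
    # With simulation, the simulator learns the simulatee's mixing
    # probability p_cooperate and trusts iff the expected payoff
    # 2 * p_cooperate beats the outside option of 1.
    if 2 * p_cooperate > 1:
        simulator = 2 * p_cooperate - SIM_COST
        simulatee = 2 * p_cooperate + 3 * (1 - p_cooperate)
        return (simulator, simulatee)
    return (1.0 - SIM_COST, 1.0)

baseline = no_simulation_outcome()        # (1.0, 1.0)
with_sim = simulation_outcome(1.0)        # simulatee commits to cooperating
print(baseline, with_sim)                 # (1.0, 1.0) (1.75, 2.0)
```

When the simulatee can credibly commit to cooperating, simulation yields (1.75, 2.0), a Pareto improvement over the no-simulation outcome (1.0, 1.0) whenever the simulation cost is small enough. This only illustrates the basic mechanism; the paper's negative result concerns games where the simulatee's action can condition on the simulator's, which this one-shot game does not capture.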