🤖 AI Summary
In decision-dependent games, the players' joint actions shift the data distribution itself, and conventional guarantees for the performatively stable equilibrium (PSE) rest on a hard-to-verify β-smoothness assumption, i.e., Lipschitz continuity of the loss gradients with respect to the data distribution. This work introduces a prior-free, gradient-based sensitivity measure that directly quantifies how decision-induced distribution shifts affect gradient behavior. Building on this measure and a strong monotonicity assumption, the work designs a sensitivity-informed repeated retraining algorithm that adjusts the players' loss functions across rounds and, for the first time, guarantees convergence to a PSE for arbitrary data distribution maps. Experiments on canonical settings, including prediction error minimization, Cournot competition, and revenue maximization, show that the method achieves lower final losses and faster convergence than state-of-the-art approaches.
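For reference, the performatively stable equilibrium is standardly defined as a fixed point of best responses under the distribution the decisions themselves induce. The second display below shows one plausible form a prior-free gradient-sensitivity quantity could take; the paper's exact definition is not given in this summary, so that form (and the fixed evaluation point $u$) is an illustrative assumption.

```latex
% Performatively stable equilibrium: every player best responds under the
% distribution induced by the equilibrium decision profile itself.
x^{\mathrm{PS}}_i \;\in\; \arg\min_{x_i}\;
  \mathbb{E}_{z \sim \mathcal{D}(x^{\mathrm{PS}})}
  \bigl[\,\ell_i\bigl(x_i, x^{\mathrm{PS}}_{-i}; z\bigr)\,\bigr]
  \qquad \text{for all players } i.

% Assumed illustrative form of a gradient-sensitivity measure: the change in
% player i's gradient (evaluated at a fixed decision u) when only the inducing
% decision profile moves from x' to x, normalized by the size of that move.
% Beta-smoothness is exactly the assumption that this ratio is bounded by beta.
S_i(x, x') \;=\;
  \frac{\bigl\|\nabla_{x_i}\,\mathbb{E}_{z \sim \mathcal{D}(x)}[\ell_i(u; z)]
        \;-\; \nabla_{x_i}\,\mathbb{E}_{z \sim \mathcal{D}(x')}[\ell_i(u; z)]\bigr\|}
       {\lVert x - x' \rVert}.
```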
📝 Abstract
In decision-dependent games, multiple players optimize their decisions under a data distribution that shifts with their joint actions, creating complex dynamics in applications such as market pricing. A practical solution concept for these dynamics is the performatively stable equilibrium, in which each player's strategy is a best response under the induced distribution. Prior work relies on $\beta$-smoothness, i.e., Lipschitz continuity of the loss function gradients with respect to the data distribution. This assumption is impractical because the data distribution maps, that is, the relationships between joint decisions and the resulting distribution shifts, are typically unknown, rendering $\beta$ unobtainable. To overcome this limitation, we propose a gradient-based sensitivity measure that directly quantifies the impact of decision-induced distribution shifts. Leveraging this measure, we derive convergence guarantees for performatively stable equilibria under a practically feasible assumption of strong monotonicity. Accordingly, we develop a sensitivity-informed repeated retraining algorithm that adjusts players' loss functions based on the sensitivity measure, guaranteeing convergence to performatively stable equilibria for arbitrary data distribution maps. Experiments on a prediction error minimization game, Cournot competition, and a revenue maximization game show that our approach outperforms state-of-the-art baselines, achieving lower losses and faster convergence.
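The abstract does not spell out the algorithm, so the following is only a minimal sketch of what sensitivity-informed repeated retraining could look like on a toy two-player game. The hidden distribution map `A`, the quadratic losses, the finite-difference sensitivity estimate, and the proximal weight `lam` are all assumptions made for illustration, not the paper's actual setup or adjustment rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decision-dependent game (illustrative assumption): two players, each with
# a scalar decision x_i; the data distribution D(x) is Gaussian with a mean
# that shifts linearly with the joint decision x. The players never see A.
A = np.array([[0.4, 0.1],
              [0.1, 0.4]])                 # hidden decision-to-distribution map

def sample_data(x, n=20000):
    """Draw samples z ~ D(x): mean 1 + A @ x, i.i.d. Gaussian noise."""
    return 1.0 + A @ x + rng.normal(scale=0.5, size=(n, 2))

def grad_i(i, x, z):
    """Gradient (in x_i) of player i's loss l_i = 0.5*(x_i - z_i)^2, averaged over z."""
    return x[i] - z[:, i].mean()

def sensitivity_informed_retraining(T=30, eta=1.0):
    x = np.zeros(2)
    x_prev, z_prev = x.copy(), sample_data(x)
    sens = 0.0
    for _ in range(T):
        z = sample_data(x)                  # data induced by the current decisions
        step = np.linalg.norm(x - x_prev)
        # Update the prior-free sensitivity estimate only while the decision
        # change is large relative to sampling noise (a guard assumed for this
        # sketch): gradient change at a *fixed* decision when only the inducing
        # distribution moves, per unit of decision change.
        if step > 0.05:
            g_new = np.array([grad_i(i, x, z) for i in range(2)])
            g_old = np.array([grad_i(i, x, z_prev) for i in range(2)])
            sens = np.linalg.norm(g_new - g_old) / step
        x_prev, z_prev = x.copy(), z
        # Sensitivity-adjusted best response: each player minimises its loss on
        # the freshly induced data plus a proximal term weighted by the
        # estimate, which damps the update when the induced shift is strong.
        lam = eta * sens
        x = np.array([(z[:, i].mean() + lam * x_prev[i]) / (1.0 + lam)
                      for i in range(2)])
    return x, sens

if __name__ == "__main__":
    x_stable, beta_hat = sensitivity_informed_retraining()
    print("approximate stable decisions:", x_stable)    # ~ (I - A)^{-1} @ [1, 1] = (2, 2)
    print("estimated gradient sensitivity:", beta_hat)  # ~ ||A||_2 = 0.5 in this toy
```

In this sketch the estimated sensitivity plays the role that the unobservable $\beta$ plays in prior analyses: the larger the measured gradient shift per unit of decision change, the more the proximal term slows each player's retraining step.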