🤖 AI Summary
This work addresses the suboptimality that arises in linear-quadratic network games when agents act on misspecified subjective models, such as constant or mean-field conjectures, which drive equilibrium outcomes away from those of the correctly specified game. To correct this, the paper introduces a "cognitive arbitrage" mechanism that reshapes agents' beliefs by minimally distorting the information they observe, without altering the underlying incentive structure. Using the Berk–Nash equilibrium to characterize long-run behavior, the authors formalize a Value of Misspecification metric to quantify the cost of these biases and cast the design of the optimal intervention as a Stackelberg optimization problem. They establish a closed-form solution and propose a two-time-scale learning algorithm that provably converges to the optimal Berk–Nash equilibrium, offering a new paradigm for behavioral regulation in boundedly rational networked systems.
📝 Abstract
We study strategic interaction in linear-quadratic network games where agents act on subjective, misspecified models of their environment. Agents observe noisy aggregate signals generated by local network externalities and interpret them through simplified conjectures, such as constant or mean-field representations. We characterize the long-run behavior using the Berk-Nash equilibrium (BNE) concept, establishing conditions under which BNE diverges from the Nash equilibrium of the perfectly specified game. We quantify this divergence using a Value of Misspecification (VoM) metric. Building on this framework, we introduce "cognitive arbitrage" -- a design paradigm where a system designer strategically shapes agents' conjectures via minimal observation distortions to steer equilibrium outcomes. We formulate the cognitive arbitrage problem as a Stackelberg optimization with closed-form solutions and prove the convergence of a two-time-scale learning algorithm to the optimal BNE. Our results provide a principled framework for influencing behavior in networked systems with bounded rationality, offering a new perspective on mechanism design that operates on agents' representations rather than their incentives.
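As a concrete illustration of the gap that a Value-of-Misspecification metric would measure, the sketch below contrasts the Nash equilibrium of a toy linear-quadratic network game with the fixed point agents reach under a mean-field conjecture. The payoff form, the parameters `b`, `delta`, and `G`, and the mean-field update rule are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy LQ network game (illustrative, not the paper's model): agent i's payoff is
#   u_i(a) = b*a_i - 0.5*a_i^2 + delta * a_i * sum_j G[i, j] * a_j,
# so the correctly specified best response is a_i = b + delta * sum_j G[i, j] * a_j.

def nash_actions(G, b, delta):
    """Nash equilibrium of the correctly specified game: solve (I - delta*G) a = b*1."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - delta * G, b * np.ones(n))

def mean_field_actions(G, b, delta, iters=500):
    """Misspecified agents conjecture that every neighbor plays the population
    mean action, giving the fixed-point iteration a_i = b + delta * deg_i * mean(a)."""
    deg = G.sum(axis=1)
    a = np.zeros(G.shape[0])
    for _ in range(iters):
        a = b + delta * deg * a.mean()
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = (rng.random((4, 4)) < 0.5).astype(float)
    np.fill_diagonal(G, 0.0)
    b, delta = 1.0, 0.1
    a_nash = nash_actions(G, b, delta)
    a_mf = mean_field_actions(G, b, delta)
    # A VoM-style gap: distance between the misspecified and Nash action profiles.
    print(np.linalg.norm(a_mf - a_nash))
```

A cognitive-arbitrage intervention, in this stylized setting, would perturb the signals feeding the mean-field conjecture so that the resulting fixed point lands closer to a designer-preferred profile, leaving the payoff parameters `b` and `delta` untouched.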