🤖 AI Summary
This paper investigates fundamental limits on learning rates for rational agents in social networks who learn sequentially from private signals and their neighbors' past actions. Using tools from game theory, Bayesian learning, and asymptotic statistical analysis, the authors develop a general model of information diffusion over networks and establish a universal upper bound on the slowest agent's learning rate—one that holds across all equilibria and is independent of the number of agents, the network topology, and individual strategies. The bound arises from an intrinsic trade-off between choosing optimal actions and revealing information, rather than from strategic interaction per se. The results extend recent findings on equilibrium social learning and characterize performance limits for distributed learning in networked environments.
📝 Abstract
We consider long-lived agents who interact repeatedly in a social network. In each period, each agent learns about an unknown state by observing a private signal and her neighbors' actions in the previous period before taking an action herself. Our main result shows that the learning rate of the slowest learning agent is bounded from above independently of the number of agents, the network structure, and the agents' strategies. Applying this result to equilibrium learning with rational agents shows that the learning rate of all agents in any equilibrium is bounded under general conditions. This extends recent findings on equilibrium learning and demonstrates that the limitation stems from an inherent trade-off between optimal action choices and information revelation rather than from strategic considerations.
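The repeated-interaction model described above can be made concrete with a small simulation. The sketch below is illustrative only and is not the paper's model or proof: it assumes a binary state, Gaussian private signals, a line network, and a simple heuristic update rule (accumulated private log-likelihood ratios plus a fixed bonus per neighbor action) as a stand-in for the fully Bayesian equilibrium behavior the paper analyzes. All parameter names and the update rule are hypothetical choices made here for illustration.

```python
import numpy as np

def simulate(n_agents=10, n_periods=200, theta=1, sigma=2.0, seed=0):
    """Heuristic sketch of repeated social learning on a line network.

    Binary state theta in {0, 1}; each period every agent receives a
    private signal ~ N(theta, sigma^2) and observes her two neighbors'
    actions from the *previous* period, then acts. The update rule here
    is a simplification, not the paper's equilibrium strategy.
    """
    rng = np.random.default_rng(seed)
    llr = np.zeros(n_agents)           # accumulated private log-likelihood ratio
    actions = np.zeros(n_agents, int)  # last period's actions (publicly observed)
    bonus = 0.5                        # heuristic weight on a neighbor's action
    history = []
    for t in range(n_periods):
        signals = rng.normal(theta, sigma, n_agents)
        # LLR increment of N(1, sigma^2) vs N(0, sigma^2) for signal s
        # is (2s - 1) / (2 sigma^2):
        llr += (signals - 0.5) / sigma**2
        # neighbors on a line: previous actions of left and right neighbor
        left = np.concatenate(([0], actions[:-1]))
        right = np.concatenate((actions[1:], [0]))
        score = llr + bonus * (2 * left - 1) + bonus * (2 * right - 1)
        actions = (score > 0).astype(int)
        history.append(actions.copy())
    return np.array(history)

hist = simulate()
mistakes = (hist != 1).mean(axis=1)  # fraction of wrong actions per period
print("early mistake rates:", mistakes[:5])
print("late mistake rates: ", mistakes[-5:])
```

Tracking each agent's per-period mistake probability in such a simulation is one way to visualize what a "learning rate" means here: the paper's result concerns how fast the slowest agent's mistake probability can possibly decay, uniformly over networks and strategies.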