🤖 AI Summary
This study investigates how AI-aggregated outputs, when fed back as training data, reshape social learning and knowledge evolution. Extending the DeGroot opinion-dynamics model, the authors introduce an AI aggregator that trains on collective beliefs and returns a synthesized signal to agents, yielding a dynamic system grounded in social learning theory. The work proposes a “learning gap” metric and uncovers a critical threshold in the aggregator’s update speed: when a global aggregator updates too rapidly, no choice of training weights robustly improves learning across environments, and outcomes can degrade along at least one dimension of the state. In contrast, a local aggregation architecture, trained on proximate or topic-specific data, consistently enhances learning in every environment. These findings suggest that local aggregation is the more reliable mechanism for knowledge integration.
📝 Abstract
Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
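The feedback loop described in the abstract can be illustrated with a minimal simulation. The sketch below is an illustrative assumption, not the paper's actual model: the parameter names (`W`, `alpha`, `lam`), the linear update rules, and the scalar state are all stand-ins chosen to mirror the abstract's ingredients (DeGroot averaging, an aggregator that trains on population beliefs at some speed `lam`, agents placing training weight `alpha` on the fed-back signal, and a learning gap measured against the efficient benchmark).

```python
import numpy as np

# Illustrative sketch of the extended DeGroot dynamics described in the
# abstract. All names and functional forms are assumptions for exposition,
# not the paper's notation.

rng = np.random.default_rng(0)
n = 20                              # number of agents
theta = 1.0                         # true state (efficient benchmark)

# Row-stochastic social weight matrix W: the standard DeGroot ingredient.
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)

x = theta + rng.normal(0.0, 1.0, n)  # agents' initial noisy beliefs
a = x.mean()                         # aggregator's initial belief

alpha = 0.2   # agents' training weight on the aggregator's signal
lam = 0.05    # aggregator's update speed (the thresholded quantity)

for _ in range(500):
    # Aggregator trains on current population beliefs at speed lam.
    a = (1.0 - lam) * a + lam * x.mean()
    # Agents mix DeGroot social averaging with the fed-back signal.
    x = (1.0 - alpha) * (W @ x) + alpha * a

# Learning gap: deviation of long-run beliefs from the benchmark.
learning_gap = abs(x.mean() - theta)
print(learning_gap)
```

Varying `lam` in such a toy model is one way to probe the abstract's threshold result: a fast-updating aggregator chases the population's current (noisy) consensus rather than anchoring it, while a slow one acts as a stabilizing summary.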