🤖 AI Summary
This study addresses how agents with limited cognitive resources can acquire flexible environmental knowledge through social interaction without relying on complex mentalizing. Using reinforcement learning simulations in reconfigurable environments, the authors model a form of social learning in which learners observe expert behavior without inferring underlying mental states. The proposed architecture combines heuristic action imitation with a mechanism that boosts the value representations of observed actions. Results demonstrate that learners rapidly develop high-level representations comparable to those of experts, with model-based learners benefiting most. These findings suggest that minimal social cues suffice to support model-based cultural transmission of knowledge, and that theory of mind or explicit mental-state inference may not be essential for such learning.
📝 Abstract
How do people acquire rich, flexible knowledge about their environment from others despite limited cognitive capacity? Humans are often thought to rely on computationally costly mentalizing, such as inferring others' beliefs. In contrast, cultural evolution research emphasizes that behavioral transmission can be supported by simple social cues. Using reinforcement learning simulations, we show how minimal social learning can indirectly transmit higher-level representations. We simulate a naïve agent searching for rewards in a reconfigurable environment, learning either alone or by observing an expert, crucially without inferring the expert's mental states. Instead, the learner either heuristically imitates observed actions or boosts the corresponding value representations. Our results demonstrate that these cues bias the learner's experience, causing its representation to converge toward the expert's. Model-based learners benefit most from social exposure, showing faster learning and more expert-like representations. These findings show how cultural transmission can arise from simple, non-mentalizing processes exploiting asocial learning mechanisms.
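The value-boosting cue described above can be illustrated in code. The sketch below is a minimal, hypothetical implementation, not the authors' actual model: it uses tabular Q-learning on a toy chain environment, and the "social" learner simply adds a bonus to the Q-value of each state-action pair it observed the expert take. All function names, parameters, and the environment are illustrative assumptions.

```python
import random

def make_chain(n=6):
    """Toy deterministic chain: states 0..n-1, actions 0=left, 1=right.
    Reward 1 is given only on reaching the rightmost state."""
    def step(s, a):
        s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
        done = (s2 == n - 1)
        return s2, (1.0 if done else 0.0), done
    return n, step

def q_learning(episodes, expert_actions=None, bonus=0.0,
               alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning. If expert_actions is given, the learner
    'observes' the expert and boosts the value of each observed
    state-action pair by `bonus` -- a crude stand-in for the paper's
    value-boosting social cue; the real mechanism may differ."""
    n, step = make_chain()
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n)]
    if expert_actions is not None:
        for s, a in expert_actions:
            Q[s][a] += bonus  # social value boost from observation
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 50:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, t = s2, t + 1
    return Q

# Expert demonstration: the expert always moves right, toward the reward.
demo = [(s, 1) for s in range(6)]
Q_social = q_learning(episodes=20, expert_actions=demo, bonus=0.5)
policy = [max((0, 1), key=lambda a: Q_social[s][a]) for s in range(5)]
```

Under this toy setup, the observation-driven bonus biases early exploration toward the expert's trajectory, so the learner's greedy policy converges to the expert's without any mental-state inference, mirroring the paper's claim that simple cues can shape the learner's own experience-driven representations.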