🤖 AI Summary
This study clarifies the conceptual distinction between trust and cooperation, investigating trust's evolutionary role in mitigating monitoring costs in repeated symmetric social dilemmas. Method: trust is formalized as a cognitive heuristic ("cease monitoring a partner after observing sufficient cooperation") and its dynamics are analyzed via game-theoretic modeling and evolutionary simulations across canonical two-player dilemmas (e.g., Prisoner's Dilemma, Snowdrift Game). Contribution/Results: when monitoring incurs a cost, this trust heuristic substantially outperforms classical reciprocal strategies such as Tit-for-Tat by reducing both erroneous punishment and the risk of exploitation, thereby sustaining higher population-level cooperation. The findings uncover a mechanism by which trust fosters the evolution of cooperation under uncertainty, offering a theoretical framework for understanding the endogenous emergence of social norms.
📝 Abstract
Trust is often thought to increase cooperation. However, game-theoretic models typically fail to distinguish between cooperative behaviour and trust, which makes it difficult to measure trust and determine its effect in different social dilemmas. We address this here by formalising trust as a cognitive shortcut in repeated games: an agent stops checking a partner's actions once a threshold level of cooperativeness has been observed. We consider trust-based strategies that implement this heuristic, and systematically analyse their evolution across the space of two-player symmetric social dilemma games. We find that where it is costly to check whether another agent's actions were cooperative, as is the case in many real-world settings, trust-based strategies can outcompete standard reciprocal strategies such as Tit-for-Tat in many social dilemmas. Moreover, the presence of trust increases the overall level of cooperation in the population, especially when agents can make unintentional errors in their actions. This holds even in the presence of strategies designed to build and then exploit trust. Overall, our results demonstrate the individual adaptive benefit to an agent of using a trust heuristic, and provide a formal theory of how trust can promote cooperation in different types of social interaction. We discuss the implications of this for interactions between humans and artificial intelligence agents.
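To make the heuristic concrete, here is a minimal sketch of a repeated Prisoner's Dilemma with costly monitoring, in which a trust-based strategy reciprocates until it has observed a run of cooperation and then stops paying the monitoring cost. This is not the paper's actual model: the payoff values, `MONITOR_COST`, `TRUST_THRESHOLD`, `ERROR_RATE`, and all function names are illustrative assumptions chosen only to show the mechanism described in the abstract.

```python
import random

# Illustrative one-shot Prisoner's Dilemma payoffs (T > R > P > S).
R, S, T, P = 3.0, 0.0, 5.0, 1.0
MONITOR_COST = 0.5   # assumed per-round cost of checking the partner's action
TRUST_THRESHOLD = 3  # assumed: trust after this many observed cooperations
ERROR_RATE = 0.05    # chance an intended action is flipped by mistake

def payoff(me, other):
    if me == "C":
        return R if other == "C" else S
    return T if other == "C" else P

def play_match(strategy_a, strategy_b, rounds=200):
    """Run one repeated game; return average per-round payoffs (net of monitoring)."""
    state_a, state_b = {}, {}
    score_a = score_b = 0.0
    last_a = last_b = None
    for _ in range(rounds):
        act_a, cost_a = strategy_a(state_a, last_b)
        act_b, cost_b = strategy_b(state_b, last_a)
        # Unintentional execution errors flip the chosen action.
        if random.random() < ERROR_RATE:
            act_a = "D" if act_a == "C" else "C"
        if random.random() < ERROR_RATE:
            act_b = "D" if act_b == "C" else "C"
        score_a += payoff(act_a, act_b) - cost_a
        score_b += payoff(act_b, act_a) - cost_b
        last_a, last_b = act_a, act_b
    return score_a / rounds, score_b / rounds

def tit_for_tat(state, partner_last):
    # Always pays to monitor; copies the partner's last observed action.
    action = "C" if partner_last in (None, "C") else "D"
    return action, MONITOR_COST

def trustful(state, partner_last):
    # Monitors and reciprocates until TRUST_THRESHOLD consecutive
    # cooperations are observed, then trusts: cooperates unconditionally
    # and stops paying the monitoring cost.
    if state.get("trusting"):
        return "C", 0.0
    streak = state.get("streak", 0)
    streak = streak + 1 if partner_last == "C" else 0
    state["streak"] = streak
    if streak >= TRUST_THRESHOLD:
        state["trusting"] = True
    return ("C" if partner_last in (None, "C") else "D"), MONITOR_COST

a, b = play_match(trustful, tit_for_tat)
print(f"trustful: {a:.2f}  tit-for-tat: {b:.2f}")
```

Under these assumed parameters the trusting agent tends to earn more per round than Tit-for-Tat: it saves the monitoring cost once trust is established and, by ignoring the partner's occasional error-induced defections, avoids the retaliation cascades that pairs of reciprocators fall into. It also illustrates the vulnerability the abstract mentions: a strategy that cooperates for `TRUST_THRESHOLD` rounds and then defects would exploit this agent, which is why the paper's evolutionary analysis, rather than any single pairing, determines when trust is adaptive.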