The evolution of trust as a cognitive shortcut in repeated interactions

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clarifying the conceptual distinction between trust and cooperation, this study investigates trust’s evolutionary role in mitigating monitoring costs within repeated symmetric social dilemmas. Method: We formalize trust as a cognitive heuristic—“cease monitoring after observing sufficient cooperation”—and analyze its dynamics via game-theoretic modeling and evolutionary simulations across canonical dilemmas (e.g., Prisoner’s Dilemma, Snowdrift Game). Contribution/Results: When monitoring incurs costs, this trust heuristic significantly outperforms classical reciprocal strategies (e.g., Tit-for-Tat) by reducing erroneous punishment and exploitation risk, thereby sustaining higher population-level cooperation. The findings uncover a mechanism by which trust fosters cooperative evolution under uncertainty, offering a novel theoretical framework for understanding the endogenous emergence of social norms.

📝 Abstract
Trust is often thought to increase cooperation. However, game-theoretic models often fail to distinguish between cooperative behaviour and trust. This makes it difficult to measure trust and determine its effect in different social dilemmas. We address this here by formalising trust as a cognitive shortcut in repeated games. This functions by avoiding checking a partner's actions once a threshold level of cooperativeness has been observed. We consider trust-based strategies that implement this heuristic, and systematically analyse their evolution across the space of two-player symmetric social dilemma games. We find that where it is costly to check whether another agent's actions were cooperative, as is the case in many real-world settings, then trust-based strategies can outcompete standard reciprocal strategies such as Tit-for-Tat in many social dilemmas. Moreover, the presence of trust increases the overall level of cooperation in the population, especially in cases where agents can make unintentional errors in their actions. This occurs even in the presence of strategies designed to build and then exploit trust. Overall, our results demonstrate the individual adaptive benefit to an agent of using a trust heuristic, and provide a formal theory for how trust can promote cooperation in different types of social interaction. We discuss the implications of this for interactions between humans and artificial intelligence agents.
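The trust heuristic described in the abstract (stop checking a partner's actions once a threshold level of cooperativeness has been observed) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual model: the payoff values, `check_cost`, and `threshold` below are assumed for the example, and error-prone actions and trust-exploiting strategies are omitted.

```python
R, S, T, P = 3, 0, 5, 1  # standard Prisoner's Dilemma payoffs

def pd_payoff(my_move, their_move):
    # 'C' = cooperate, 'D' = defect
    if my_move == 'C':
        return R if their_move == 'C' else S
    return T if their_move == 'C' else P

def tit_for_tat_payoff(partner_moves, check_cost):
    """Tit-for-Tat must pay the monitoring cost every round."""
    total, last_seen = 0.0, 'C'  # start by cooperating
    for their in partner_moves:
        total += pd_payoff(last_seen, their)  # copy partner's last observed move
        total -= check_cost                   # always checks the partner
        last_seen = their
    return total

def trust_payoff(partner_moves, check_cost, threshold=3):
    """Trust heuristic: behave like Tit-for-Tat, but after observing
    `threshold` consecutive cooperations, stop checking and simply cooperate."""
    total, last_seen = 0.0, 'C'
    streak, trusting = 0, False
    for their in partner_moves:
        my = 'C' if trusting else last_seen
        total += pd_payoff(my, their)
        if not trusting:
            total -= check_cost  # pay to observe the partner this round
            last_seen = their
            streak = streak + 1 if their == 'C' else 0
            if streak >= threshold:
                trusting = True  # enough cooperation seen: stop monitoring
    return total
```

Against an unconditional cooperator, the trust strategy pays the monitoring cost only until its threshold is met, so its total payoff exceeds Tit-for-Tat's whenever checking is costly, which is the paper's core intuition.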
Problem

Research questions and friction points this paper is trying to address.

Distinguishing trust from cooperation in game theory models
Formalizing trust as a cognitive shortcut in repeated games
Analyzing evolution of trust-based strategies in social dilemmas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizing trust as a cognitive shortcut
Implementing trust-based heuristic strategies
Analyzing evolution in social dilemma games
Cedric Perret
Department of Economics, University of Lausanne
The Anh Han
Professor of Computer Science, Teesside University
Evolutionary Game Theory, Artificial Intelligence, Evolution of Cooperation, Multi-agent Systems
Elias Fernández Domingos
Machine Learning Group, Université libre de Bruxelles and AI Lab, Vrije Universiteit Brussel
Theodor Cimpeanu
Division of Biological and Environmental Sciences, University of Stirling
Simon T. Powers
Division of Computing Science and Mathematics, University of Stirling
Multi-Agent Systems, Socio-Technical Systems, Institutions, Trust, Game Theory