🤖 AI Summary
This study investigates how human cooperation in the repeated Prisoner's Dilemma varies with the purported identity of the counterpart agent (self-identified human, rule-based AI, or large language model (LLM) agent), with particular attention to identity-related cognitive biases and gender-by-agent-type interaction effects. Method: Thirty participants played repeated rounds of the game; behavioral data, including cooperation rates, response latency, and acceptance of cooperative repair attempts, were collected and analyzed with mixed-effects models. Contribution/Results: The study provides the first systematic evidence that ascribing a "human-like" identity to LLM agents significantly increases human cooperation, and that this effect is moderated by participant gender: women exhibited greater trust in, and acceptance of cooperative repair from, LLM agents. Two distinct response patterns were identified: proactive altruism and repair sensitivity. These findings extend theoretical frameworks of human–AI collaboration and offer empirical grounding for designing trustworthy AI systems.
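The gender-by-agent-type interaction reported above can be made concrete with a small sketch. The numbers below are invented for illustration (they are not the study's data), and the study itself used mixed-effects models rather than this bare descriptive computation; the sketch only shows what "interaction effect" means at the level of mean cooperation rates:

```python
# Hypothetical cooperation counts (NOT the study's data): each cell maps a
# (gender, purported-identity) pair to (cooperative moves, total moves).
data = {
    ("women", "llm"):  (42, 60),
    ("women", "rule"): (30, 60),
    ("men",   "llm"):  (33, 60),
    ("men",   "rule"): (31, 60),
}

def coop_rate(cell):
    """Mean cooperation rate for one gender x identity cell."""
    coop, total = data[cell]
    return coop / total

# Simple effect of purported identity within each gender...
women_diff = coop_rate(("women", "llm")) - coop_rate(("women", "rule"))
men_diff = coop_rate(("men", "llm")) - coop_rate(("men", "rule"))

# ...and the interaction: does the identity effect differ by gender?
# A positive value here mirrors the paper's finding that the LLM-identity
# boost to cooperation is larger for women.
interaction = women_diff - men_diff
```

In a real analysis this contrast would be estimated as a fixed-effect interaction term in a mixed-effects model, with random intercepts per participant to account for repeated play.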
📝 Abstract
With the rise of large language models (LLMs), AI agents acting as autonomous decision-makers present significant opportunities and challenges for human–AI cooperation. While many studies have explored human cooperation with AI as a tool, the role of LLM-augmented autonomous agents in competitive–cooperative interactions remains under-examined. This study investigates human cooperative behavior by engaging 30 participants in repeated Prisoner's Dilemma games against LLM agents presented under different purported identities (human, rule-based AI agent, or LLM agent). Findings show significant differences in cooperative behavior depending on the agents' purported identities, as well as an interaction effect between participant gender and purported identity. We also analyzed human response patterns, including game completion time, proactive favorable behavior, and acceptance of repair efforts. These insights offer a new perspective on human interaction with LLM agents in competitive–cooperative contexts, such as virtual avatars or future physical embodiments. The study underscores the importance of understanding human biases toward AI agents and how observed behaviors can shape future human–AI cooperation dynamics.
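For readers unfamiliar with the game underlying the experiment, a minimal sketch of a repeated Prisoner's Dilemma follows. The payoff values (T=5, R=3, P=1, S=0) are the conventional ones satisfying T > R > P > S; the abstract does not specify the payoffs or strategies actually used in the study, so everything below is illustrative only:

```python
# One-shot Prisoner's Dilemma payoff matrix, keyed by (player_a, player_b)
# moves: "C" = cooperate, "D" = defect. Values assumed (standard 5/3/1/0).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_repeated_pd(strategy_a, strategy_b, rounds=5):
    """Play `rounds` iterations; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Unconditional defection."""
    return "D"
```

Because play is repeated and histories are visible, behaviors like the "repair efforts" measured in the study (returning to cooperation after a defection) become possible, which is precisely what distinguishes this setting from a one-shot game.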