Experimental Exploration: Investigating Cooperative Interaction Behavior Between Humans and Large Language Model Agents

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how human cooperation in the repeated prisoner’s dilemma varies when interacting with agents of different identity labels—self-identified human, rule-based AI, or large language model (LLM) agent—with particular attention to identity-related cognitive biases and gender–agent-type interaction effects. Method: Thirty participants engaged in iterative gameplay; behavioral data—including cooperation rates, response latency, and acceptance of cooperative repair attempts—were collected and analyzed using mixed-effects modeling. Contribution/Results: The study provides the first systematic evidence that ascribing a “human-like” identity to LLM agents significantly increases human cooperation, but this effect is moderated by participant gender: women exhibit greater trust in and acceptance of cooperative repair from LLM agents. Two distinct response patterns were identified—proactive altruism and repair sensitivity. These findings extend theoretical frameworks of human–AI collaboration and offer empirical grounding for designing trustworthy AI systems.

📝 Abstract
With the rise of large language models (LLMs), AI agents acting as autonomous decision-makers present significant opportunities and challenges for human-AI cooperation. While many studies have explored human cooperation with AI as a tool, the role of LLM-augmented autonomous agents in competitive-cooperative interactions remains under-examined. This study investigates human cooperative behavior by engaging 30 participants in repeated Prisoner's Dilemma games with LLM agents presented under different purported identities (human, rule-based AI agent, and LLM agent). Findings show significant differences in cooperative behavior depending on the agents' purported identity, as well as an interaction effect between participant gender and purported identity. We also analyzed human response patterns, including game completion time, proactive favorable behavior, and acceptance of repair efforts. These insights offer a new perspective on human interactions with LLM agents in competitive-cooperation contexts, such as virtual avatars or future physical entities. The study underscores the importance of understanding human biases toward AI agents and how observed behaviors can influence future human-AI cooperation dynamics.
Problem

Research questions and friction points this paper is trying to address.

Explores human cooperation with LLM agents in competitive-cooperative interactions.
Investigates how agent characteristics and participant gender affect cooperative behavior.
Analyzes human response patterns in repeated Prisoner's Dilemma games.
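To make the experimental setting concrete, the interaction structure can be sketched as a repeated Prisoner's Dilemma loop. This is a minimal illustration, not the paper's actual implementation: the payoff values (T=5, R=3, P=1, S=0) are standard textbook choices, and the tit-for-tat opponent stands in for whatever strategy the LLM agents actually played.

```python
# Minimal repeated Prisoner's Dilemma loop (illustrative sketch; payoffs and
# the tit-for-tat strategy are standard conventions, not the paper's setup).

PAYOFFS = {  # (my_move, partner_move) -> (my_payoff, partner_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def tit_for_tat(history):
    """Cooperate on the first round, then mirror the partner's last move."""
    return "C" if not history else history[-1][1]

def play_repeated_pd(agent_a, agent_b, rounds=10):
    """Play `rounds` iterations; return both scores and A's cooperation rate."""
    history_a, history_b = [], []  # each entry: (own_move, partner_move)
    score_a = score_b = coop_a = 0
    for _ in range(rounds):
        move_a = agent_a(history_a)
        move_b = agent_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        coop_a += move_a == "C"
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b, coop_a / rounds

print(play_repeated_pd(tit_for_tat, tit_for_tat, rounds=10))
```

In the study, one side of this loop is a human participant and the other an agent whose purported identity is varied between conditions; measures such as cooperation rate fall out directly from the move history.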
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explores human-LLM agent cooperative behavior.
Uses repeated Prisoner's Dilemma games with varied agent types.
Analyzes the impact of participant gender and agent characteristics.