Social Cooperation in Conversational AI Agents

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language model (LLM)-based dialogue agents perform well in short-term interactions but struggle to adapt to persistent user corrections and evolving interpersonal dynamics in long-term human–agent collaboration. This position paper argues that such limitations can be overcome by explicitly modeling human social intelligence: the ability to build and maintain long-term relationships with other agents whose behavior cannot always be predicted. By mathematically modeling the strategies humans use to communicate and reason about one another over long periods of time, the authors propose deriving new game-theoretic objectives, such as long-term trust building and intention inference, against which LLMs and future AI agents may be optimized, moving beyond conventional myopic interaction paradigms.

📝 Abstract
The development of AI agents based on large, open-domain language models (LLMs) has paved the way for general-purpose AI assistants that can support humans in tasks such as writing, coding, graphic design, and scientific research. A major challenge with such agents is that, by necessity, they are trained by observing relatively short-term interactions with humans. Such models can fail to generalize to long-term interactions, for example, interactions where a user has repeatedly corrected mistakes on the part of the agent. In this work, we argue that these challenges can be overcome by explicitly modeling humans' social intelligence, that is, their ability to build and maintain long-term relationships with other agents whose behavior cannot always be predicted. By mathematically modeling the strategies humans use to communicate and reason about one another over long periods of time, we may be able to derive new game theoretic objectives against which LLMs and future AI agents may be optimized.
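To make the abstract's idea of a long-horizon, game-theoretic objective concrete, here is a hypothetical toy sketch (not from the paper). It models a repeated "trust game" in which the user's trust rises after cooperative turns and falls after defections, so a myopic policy is penalized over the full horizon. All function names, parameters, and numbers are illustrative assumptions.

```python
def long_horizon_payoff(agent_coop, user_trust, horizon=50, gamma=0.95,
                        trust_gain=0.1, trust_loss=0.3):
    """Discounted cooperative return over a long interaction (toy model).

    agent_coop: probability the agent behaves cooperatively each turn.
    user_trust: the user's initial trust, in [0, 1]. Trust increases
    after cooperative turns and decreases after defections, so
    short-sighted defection erodes future reward.
    """
    total = 0.0
    trust = user_trust
    for t in range(horizon):
        # Expected per-turn reward: cooperation pays off only when trusted.
        reward = agent_coop * trust
        total += (gamma ** t) * reward
        # Trust dynamics: cooperation builds trust, defection erodes it.
        trust += trust_gain * agent_coop - trust_loss * (1 - agent_coop)
        trust = min(1.0, max(0.0, trust))
    return total

# A consistently cooperative policy outperforms a myopic one over the horizon.
cooperative = long_horizon_payoff(agent_coop=0.9, user_trust=0.5)
myopic = long_horizon_payoff(agent_coop=0.3, user_trust=0.5)
```

Optimizing an agent against this kind of objective, rather than a per-turn reward, is one way to read the paper's proposal: the trust-dynamics term couples today's action to the value of every future turn.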
Problem

Research questions and friction points this paper is trying to address.

Enabling AI agents to generalize to long-term human interactions
Modeling human social intelligence for better AI cooperation
Developing game-theoretic objectives for optimizing conversational AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modeling human social intelligence in AI
Using game theory for long-term interaction objectives
Optimizing LLMs with human-like communication strategies
Mustafa Mert Celikok
Department of Intelligent Systems, Delft University of Technology
Saptarashmi Bandyopadhyay
University of Maryland, College Park
Artificial Intelligence · Intelligent Agents · NLP · Machine Learning · Reinforcement Learning
R. Loftin
School of Computer Science, University of Sheffield