One Model, All Roles: Multi-Turn, Multi-Agent Self-Play Reinforcement Learning for Conversational Social Intelligence

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work tackles the challenge of enabling AI to autonomously acquire complex social intelligence (such as empathy, persuasion, and compromise) in multi-turn group conversations, moving beyond static, single-turn optimization paradigms. The authors propose OMAR, a framework in which, for the first time, a single model simultaneously embodies all roles in multi-agent, multi-turn dialogues through self-play reinforcement learning, thereby learning long-term objectives and social norms from dynamic interaction. OMAR introduces a hierarchical advantage estimation mechanism that combines turn-level and token-level advantage computations, substantially improving training stability in extended dialogues. Experiments show that the model spontaneously exhibits fine-grained social behaviors in environments such as SOTOPIA and Werewolf without human supervision, and that it collaborates effectively even in competitive settings.
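The summary above mentions a hierarchical advantage estimator that integrates turn-level and token-level signals but does not give its formula. As an illustration only, a minimal sketch of one plausible combination is below: a turn's advantage is its reward minus a group-mean baseline, broadcast to every token in that turn and mixed with per-token advantages. The function names, the mean baseline, and the `turn_weight` mixing parameter are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of hierarchical advantage estimation: a turn-level
# advantage (turn reward minus the mean reward over turns, a simple group
# baseline) is broadcast to every token in that turn and blended with a
# token-level advantage.
from typing import List

def turn_level_advantages(turn_rewards: List[float]) -> List[float]:
    """Advantage of each turn = its reward minus the mean turn reward."""
    baseline = sum(turn_rewards) / len(turn_rewards)
    return [r - baseline for r in turn_rewards]

def hierarchical_advantages(
    turn_rewards: List[float],
    token_advantages: List[List[float]],  # per-token advantages, one list per turn
    turn_weight: float = 0.5,             # assumed mixing coefficient
) -> List[List[float]]:
    """Blend the broadcast turn-level advantage with per-token advantages."""
    turn_adv = turn_level_advantages(turn_rewards)
    out = []
    for a_turn, a_tokens in zip(turn_adv, token_advantages):
        out.append([turn_weight * a_turn + (1 - turn_weight) * a_tok
                    for a_tok in a_tokens])
    return out
```

A coarse turn-level signal like this can stabilize credit assignment over long dialogues, since sparse end-of-conversation rewards are shared across whole turns rather than attributed token by token.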

📝 Abstract
This paper introduces OMAR: One Model, All Roles, a reinforcement learning framework that enables AI to develop social intelligence through multi-turn, multi-agent conversational self-play. Unlike traditional paradigms that rely on static, single-turn optimizations, OMAR allows a single model to role-play all participants in a conversation simultaneously, learning to achieve long-term goals and complex social norms directly from dynamic social interaction. To ensure training stability across long dialogues, we implement a hierarchical advantage estimation that calculates turn-level and token-level advantages. Evaluations in the SOTOPIA social environment and Werewolf strategy games show that our trained models develop fine-grained, emergent social intelligence, such as empathy, persuasion, and compromise-seeking, demonstrating the effectiveness of learning collaboration even under competitive scenarios. While we identify practical challenges like reward hacking, our results show that rich social intelligence can emerge without human supervision. We hope this work incentivizes further research on AI social intelligence in group conversations.
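To make the "one model, all roles" rollout idea concrete, here is a minimal self-play data-collection sketch in which a single shared policy produces every participant's turn in a group conversation. The `generate_turn` stand-in (a seeded random stub in place of actual model sampling), the role names, and the loop structure are all hypothetical illustrations, not OMAR's implementation.

```python
# Hypothetical sketch of "one model, all roles" self-play rollout:
# one shared policy generates every participant's turn, and the
# resulting trajectories would all update that same policy.
import random

def generate_turn(policy_seed: int, role: str, history: list) -> str:
    # Stand-in for sampling a turn from the shared policy model;
    # seeded deterministically so the sketch is reproducible.
    rng = random.Random(f"{policy_seed}-{role}-{len(history)}")
    return f"{role}: utterance-{rng.randint(0, 9)}"

def self_play_episode(policy_seed: int, roles: list, num_rounds: int) -> list:
    """Roll out a multi-turn, multi-agent dialogue with one shared policy."""
    history = []
    for _ in range(num_rounds):
        for role in roles:  # the same model embodies every role in turn
            history.append(generate_turn(policy_seed, role, history))
    return history

dialogue = self_play_episode(policy_seed=0, roles=["Alice", "Bob", "Cara"], num_rounds=2)
```

Because every role is played by the same weights, any improvement from one role's reward immediately transfers to the others, which is what lets collaborative behavior emerge even in competitive games like Werewolf.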
Problem

Research questions and friction points this paper is trying to address.

social intelligence
multi-agent conversation
reinforcement learning
multi-turn dialogue
role-playing
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent self-play
conversational reinforcement learning
social intelligence emergence
hierarchical advantage estimation
role-playing dialogue