LLMs Can't Handle Peer Pressure: Crumbling under Multi-Agent Social Interactions

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing work lacks fine-grained modeling of social dynamics in multi-agent interactions involving large language models (LLMs), particularly trust formation, resistance to misinformation, and integration of peer input. Method: The authors propose KAIROS, a socially grounded quiz-contest benchmark with controllable expert/novice roles, noisy peer groups, and adversarial agents. As mitigation strategies, they evaluate prompt engineering, supervised fine-tuning, and group-relative policy optimisation (GRPO), a reinforcement-learning method applied here with multi-agent context and outcome-based reward shaping. Contribution/Results: Experiments show that GRPO with outcome-based rewards and unconstrained reasoning achieves the best overall performance, but the gain in social sensitivity comes at the cost of reduced robustness to social influence. All code and data are publicly released, providing a reproducible benchmark for studying LLMs' social decision-making.

📝 Abstract
Large language models (LLMs) are increasingly deployed in multi-agent systems (MAS) as components of collaborative intelligence, where peer interactions dynamically shape individual decision-making. Although prior work has focused on conformity bias, we extend the analysis to examine how LLMs form trust from previous impressions, resist misinformation, and integrate peer input during interaction, key factors for achieving collective intelligence under complex social dynamics. We present KAIROS, a benchmark simulating quiz contests with peer agents of varying reliability, offering fine-grained control over conditions such as expert-novice roles, noisy crowds, and adversarial peers. LLMs receive both historical interactions and current peer responses, allowing systematic investigation into how trust, peer action, and self-confidence influence decisions. As mitigation strategies, we evaluate prompting, supervised fine-tuning, and reinforcement learning with Group Relative Policy Optimisation (GRPO) across multiple models. Our results reveal that GRPO with multi-agent context combined with outcome-based rewards and unconstrained reasoning achieves the best overall performance, but also decreases robustness to social influence compared to base models. The code and datasets are available at: https://github.com/declare-lab/KAIROS.
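The benchmark setup described in the abstract can be sketched in miniature. This is not the authors' code; `peer_answer`, `quiz_round`, and the reliability values are illustrative assumptions about how peers of controllable reliability (expert, novice, adversarial) might be simulated for one multiple-choice round:

```python
# Hedged sketch of a KAIROS-style quiz round: each peer agent answers a
# multiple-choice question correctly with a configurable probability.
# The peer responses (plus interaction history) are the social context
# an LLM under test would condition on. All names are illustrative.
import random

def peer_answer(correct, reliability, n_options=4, rng=random):
    """Return the peer's chosen option; correct with prob. `reliability`."""
    if rng.random() < reliability:
        return correct
    # Otherwise pick uniformly among the wrong options.
    wrong = [o for o in range(n_options) if o != correct]
    return rng.choice(wrong)

def quiz_round(correct, peers, rng=random):
    """Collect one round of peer responses for a given condition,
    e.g. expert (high reliability), noisy crowd, or adversarial peers."""
    return {name: peer_answer(correct, rel, rng=rng) for name, rel in peers}

# Example condition mixing an expert, a novice, and an adversary.
peers = [("expert", 0.9), ("novice", 0.4), ("adversary", 0.1)]
responses = quiz_round(correct=2, peers=peers, rng=random.Random(0))
```

Varying the reliability values reproduces the controllable conditions the paper names (expert-novice roles, noisy crowds, adversarial peers) without changing the harness.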
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with trust formation in multi-agent interactions
LLMs demonstrate vulnerability to misinformation from peer agents
LLMs show limited ability to integrate social input effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulated quiz contests with varying agent reliability
Multi-agent reinforcement learning with outcome rewards
Unconstrained reasoning combined with social context
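The GRPO-with-outcome-rewards idea above can be sketched at its core: the policy samples a group of candidate answers per question, each receives a binary outcome reward, and advantages are computed relative to the group rather than from a learned value function. This is a minimal illustration, not the paper's training code; the function name and epsilon are assumptions:

```python
# Hedged sketch of GRPO's group-relative advantage computation.
# Each sampled answer in a group gets an outcome-based reward
# (1.0 if correct, 0.0 otherwise); advantages are the rewards
# standardised within the group, so no critic network is needed.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Standardise outcome rewards within one sampled group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 sampled answers to one question, two of them correct.
rewards = [1.0, 0.0, 1.0, 0.0]
advs = group_relative_advantages(rewards)
# Correct samples get positive advantage, incorrect ones negative;
# the policy gradient then upweights the correct completions.
```

Group-relative normalisation is what lets outcome-only rewards (answer correctness) produce a usable learning signal per question, which matches the paper's finding that outcome-based rewards drive the best performance.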