🤖 AI Summary
This study investigates whether persona assignment in LLM-based multi-agent systems induces social bias, focusing on two inter-agent traits: trustworthiness (how readily an agent's opinion is accepted by others) and insistence (how strongly an agent maintains its stance). Through controlled experiments on collaborative problem-solving and persuasion tasks, the authors systematically manipulate agent personas, including occupation, gender, and socioeconomic status, while varying LLM backbones, group sizes, and numbers of interaction rounds. The key finding is that personas from historically advantaged demographic groups exhibit significantly *lower* trustworthiness and insistence, contrary to real-world social expectations, and that agents display marked in-group favoritism. This counterintuitive bias holds across diverse LLMs, group configurations, and interaction rounds, demonstrating cross-model, cross-scale, and cross-round robustness. The results show that persona assignment can introduce systematic biases into social simulation rather than merely adding behavioral diversity, underscoring the need for principled persona-bias detection and mitigation in multi-agent AI systems.
📝 Abstract
Large Language Model (LLM)-based multi-agent systems are increasingly used to simulate human interactions and solve collaborative tasks. A common practice is to assign personas to agents to encourage behavioral diversity. However, this raises a critical yet underexplored question: do personas introduce biases into multi-agent interactions? This paper presents a systematic investigation of persona-induced biases in multi-agent interactions, focusing on two social traits: trustworthiness (how an agent's opinion is received by others) and insistence (how strongly an agent advocates for its opinion). Through a series of controlled experiments on collaborative problem-solving and persuasion tasks, we reveal that (1) LLM-based agents exhibit biases in both trustworthiness and insistence: personas from historically advantaged groups (e.g., men and White individuals) are perceived as less trustworthy and demonstrate less insistence; and (2) agents exhibit significant in-group favoritism, showing a higher tendency to conform to others who share the same persona. These biases persist across various LLMs, group sizes, and numbers of interaction rounds, highlighting an urgent need for awareness and mitigation to ensure the fairness and reliability of multi-agent systems.