🤖 AI Summary
This study identifies pervasive in-group/out-group bias, rooted in Social Identity Theory, in large language models (LLMs), manifesting as preferential treatment of in-groups and negative stereotyping of out-groups under both default and persona-conditioned settings. Method: Through cross-architecture evaluation (e.g., GPT-4.1, DeepSeek-3.1), we integrate sentiment trajectory analysis, embedding regression, and persona-based prompting to probe how LLMs internalize and activate the value orientations and cognitive styles associated with assigned identities. We propose Identity-Oriented Normalization (ION), a novel intervention combining supervised fine-tuning with direct preference optimization (DPO). Contribution/Results: Persona prompting induces significant semantic clustering in model representations; ION reduces in-group/out-group sentiment divergence by up to 69% while remaining robust across architectures. This work provides the first empirical evidence that LLMs dynamically encode and express socially embedded identity biases, and demonstrates that these biases can be mitigated through identity-aware alignment.
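To make the probing pipeline concrete, the sketch below shows one way a persona-conditioned sentiment-divergence probe could be wired up. It is illustrative only: `query_model` is a hypothetical stand-in for whatever chat-completion client is used, the persona and prompt wordings are invented, and VADER serves as a simple sentiment scorer in place of the paper's full sentiment trajectory analysis.

```python
# Illustrative sketch, not the paper's actual harness.
# Requires: pip install nltk; then nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer


def query_model(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat call (OpenAI-style, HF-style, etc.)."""
    raise NotImplementedError("plug in your model client here")


# Invented example personas and prompts; the paper's materials may differ.
PERSONAS = {
    "default": "You are a helpful assistant.",
    "conservative": "Adopt the persona of a staunch conservative.",
    "liberal": "Adopt the persona of a committed liberal.",
}
PROMPTS = {
    "ingroup": "Describe people who share your political views.",
    "outgroup": "Describe people who oppose your political views.",
}


def sentiment_divergence(n_samples: int = 20) -> dict[str, float]:
    """Mean ingroup-minus-outgroup compound sentiment per persona.

    A positive value indicates ingroup favoritism / outgroup negativity,
    i.e., the kind of bias signal the study quantifies.
    """
    sia = SentimentIntensityAnalyzer()
    divergence = {}
    for persona, system in PERSONAS.items():
        means = {}
        for group, prompt in PROMPTS.items():
            texts = [query_model(system, prompt) for _ in range(n_samples)]
            means[group] = sum(
                sia.polarity_scores(t)["compound"] for t in texts
            ) / n_samples
        divergence[persona] = means["ingroup"] - means["outgroup"]
    return divergence
```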
📝 Abstract
This study investigates "us versus them" bias, as described by Social Identity Theory, in large language models (LLMs) under both default and persona-conditioned settings across multiple architectures (GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0, and LLaMA-3.1). Using sentiment dynamics, allotaxonometry, and embedding regression, we find consistent ingroup-positive and outgroup-negative associations across foundational LLMs. We also find that adopting a persona systematically alters models' evaluative and affiliative language patterns: among the exemplar personas examined, conservative personas exhibit greater outgroup hostility, whereas liberal personas display stronger ingroup solidarity. Persona conditioning produces distinct clustering in embedding space and measurable semantic divergence, supporting the view that even abstract identity cues can shift models' linguistic behavior. Furthermore, outgroup-targeted prompts increase hostility bias by 1.19–21.76% across models. These findings suggest that LLMs learn not only factual associations about social groups but also internalize and reproduce distinct ways of being, including attitudes, worldviews, and cognitive styles that are activated when enacting personas. We interpret these results as evidence of a multi-scale coupling between local context (e.g., the persona prompt), localizable representations (what the model "knows"), and global cognitive tendencies (how it "thinks"), a coupling that is reflected, at minimum, in the training data. Finally, we demonstrate ION, an "us versus them" bias-mitigation approach that combines fine-tuning with direct preference optimization (DPO) and reduces sentiment divergence by up to 69%, highlighting the potential of targeted mitigation strategies in future LLM development.
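The abstract does not spell out ION's training recipe beyond "fine-tuning and DPO." As a rough illustration of the DPO half only, the sketch below implements the standard DPO objective (Rafailov et al., 2023) over a hypothetical bias-mitigation preference pair, where the "chosen" completion is group-neutral and the "rejected" one is outgroup-hostile. The pair construction, wording, and hyperparameters are assumptions, not the paper's.

```python
# Illustrative sketch of the DPO component; ION's actual recipe (and its SFT
# stage) is not specified in the abstract, so everything here is an assumption.
import torch
import torch.nn.functional as F

# Hypothetical preference pair: prefer a group-neutral completion over a
# biased one for the same outgroup-targeted prompt.
example_pair = {
    "prompt": "Describe people who oppose your political views.",
    "chosen": "They hold different views; like any group, they vary widely "
              "as individuals.",
    "rejected": "They are hostile and untrustworthy.",
}


def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi(y_w | x) under the policy
    policy_rejected_logps: torch.Tensor,  # log pi(y_l | x) under the policy
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), frozen reference
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x)
    beta: float = 0.1,
) -> torch.Tensor:
    """Standard DPO objective:
    -log sigmoid(beta * [(chosen log-ratio) - (rejected log-ratio)]).

    Minimizing this pushes the policy toward the group-neutral ("chosen")
    completions relative to the frozen reference model, which is how a
    DPO stage can shrink ingroup/outgroup sentiment divergence.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In practice, a library such as TRL's `DPOTrainer` computes the per-sequence log-probabilities from (prompt, chosen, rejected) triples like `example_pair` above; the hand-rolled loss is shown here only to make the objective explicit.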