Realistic threat perception drives intergroup conflict: A causal, dynamic analysis using generative-agent simulations

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Empirical research on intergroup conflict faces persistent challenges in causal inference, ethical constraints on experimental manipulation, and scarcity of dynamic, multimodal data on threat perceptions. Method: We develop a large language model–driven generative agent–based virtual society to isolate and precisely manipulate realistic (material) and symbolic (identity-based) threats while dynamically tracking behavioral, linguistic, and attitudinal responses across time. Contribution/Results: We establish, for the first time, the direct causal primacy of realistic threat in driving hostile intergroup behavior. Symbolic threat exerts significant effects only in the absence of realistic threat and operates exclusively through ingroup bias mediation. Neutral cross-group contact buffers hostility, whereas structural inequality exacerbates majority-group animosity. Our framework yields an interpretable causal model that identifies structural fairness interventions and non-hostile intergroup contact as critical leverage points—providing a foundation for evidence-based conflict prevention policy modeling.

Technology Category

Application Category

📝 Abstract
Human conflict is often attributed to threats against material conditions and symbolic values, yet it remains unclear how they interact and which dominates. Progress is limited by weak causal control, ethical constraints, and scarce temporal data. We address these barriers using simulations of large language model (LLM)-driven agents in virtual societies, independently varying realistic and symbolic threat while tracking actions, language, and attitudes. Representational analyses show that the underlying LLM encodes realistic threat, symbolic threat, and hostility as distinct internal states, that our manipulations map onto them, and that steering these states causally shifts behavior. Our simulations provide a causal account of threat-driven conflict over time: realistic threat directly increases hostility, whereas symbolic threat effects are weaker, fully mediated by ingroup bias, and increase hostility only when realistic threat is absent. Non-hostile intergroup contact buffers escalation, and structural asymmetries concentrate hostility among majority groups.
Problem

Research questions and friction points this paper is trying to address.

Investigates how realistic and symbolic threats drive intergroup conflict dynamics
Uses LLM agent simulations to overcome ethical and causal analysis limitations
Tests causal effects of threats on hostility, bias, and conflict escalation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven agent simulations model threat dynamics
Manipulate realistic and symbolic threats independently
Track internal states, actions, and language causally
Suhaib Abdurahman
Suhaib Abdurahman
University of Southern California
Computational Social ScienceSocial PsychologyGenerative AIMachine Learning
Farzan Karimi-Malekabadi
Farzan Karimi-Malekabadi
University of Southern California
MoralityCultureLarge Language Models
C
Chenxiao Yu
Department of Computer Science, University of Southern California
N
Nour S. Kteily
Kellogg School of Management, Northwestern University
M
Morteza Dehghani
Department of Psychology, University of Southern California