🤖 AI Summary
This work investigates how language use and logical fallacies drive opinion dynamics in social systems, specifically polarization, radicalization, and consensus formation. The authors propose LODAS (Language-Driven Opinion Dynamics Model for Agent-Based Simulations), an integrated model coupling large language model (LLM)-based agents, natural-language argumentation, and discrete opinion dynamics. Using the "Ship of Theseus" paradox as a controlled sociolinguistic testbed, they simulate multi-agent debates under varying initial opinion distributions. Results reveal that LLM agents exhibit dual biases: a high propensity to generate fallacious arguments and a heightened susceptibility to fallacious reasoning; agreeableness and sycophancy emerge as their key behavioral traits. Across balanced, polarized, and unbalanced initial conditions, consensus emerges rapidly, but this rapid convergence exposes underlying logical fragility and limited persuasive efficacy. The study offers an interpretable, scalable paradigm for joint language-cognition-society modeling in computational social science.
📝 Abstract
Understanding how opinions evolve is crucial for addressing issues such as polarization, radicalization, and consensus in social systems. While much research has focused on identifying the factors that influence opinion change, the role of language and argumentative fallacies remains underexplored. This paper aims to fill this gap by investigating how language - along with social dynamics - influences opinion evolution through LODAS, a Language-Driven Opinion Dynamics Model for Agent-Based Simulations. The model simulates debates around the "Ship of Theseus" paradox, in which agents with discrete opinions interact and evolve their opinions by accepting, rejecting, or ignoring the arguments presented. We study three scenarios: balanced, polarized, and unbalanced opinion distributions. Agreeableness and sycophancy emerge as two main characteristics of the LLM agents, and consensus around the presented statement emerges in almost any setting. Moreover, these AI agents often produce fallacious arguments in their attempts to persuade their peers and - owing to this complacency - are themselves highly influenced by arguments built on logical fallacies. These results highlight the potential of the framework not only for simulating social dynamics but also for exploring, from another perspective, the biases and shortcomings of LLMs, which may affect their interactions with humans.
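To make the interaction rule concrete, here is a minimal, hypothetical sketch of such a debate loop. The discrete opinion encoding, the scenario weights, and the accept/reject/ignore probabilities are all illustrative assumptions, not the paper's implementation: in LODAS the arguments are generated and evaluated in natural language by LLM agents, whereas here a single high acceptance probability simply stands in for the agreeableness and sycophancy the study reports.

```python
import random

# Hypothetical LODAS-style debate loop (illustrative assumptions throughout;
# the actual model uses LLM agents exchanging natural-language arguments).

OPINIONS = (-1, 0, 1)  # disagree / undecided / agree with the statement

def initial_population(n, scenario):
    """Assign discrete opinions under one of the three studied scenarios."""
    if scenario == "balanced":
        weights = (1, 1, 1)
    elif scenario == "polarized":
        weights = (1, 0, 1)   # two opposing camps, no undecided agents
    elif scenario == "unbalanced":
        weights = (3, 1, 1)   # one camp dominates (assumed ratio)
    else:
        raise ValueError(scenario)
    return random.choices(OPINIONS, weights=weights, k=n)

def debate_round(opinions, accept_p=0.6, reject_p=0.2):
    """One interaction: a speaker argues, a listener accepts, rejects, or ignores."""
    speaker, listener = random.sample(range(len(opinions)), 2)
    r = random.random()
    if r < accept_p:                 # accept: adopt the speaker's opinion
        opinions[listener] = opinions[speaker]
    elif r < accept_p + reject_p:    # reject: harden against the speaker (assumed rule)
        opinions[listener] = -opinions[speaker]
    # otherwise ignore: the listener's opinion is unchanged

opinions = initial_population(30, "polarized")
for _ in range(500):
    debate_round(opinions)
print(sorted(opinions))  # a high accept_p drives rapid consensus
```

Even this crude caricature reproduces the headline qualitative behavior: when acceptance dominates, the population collapses to consensus regardless of the initial distribution, which is exactly why consensus reached this way says little about the logical quality of the arguments that produced it.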