Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how human-like persona traits influence large language models' (LLMs) moral reasoning and persuasive behavior in realistic ethical dilemmas. Method: We introduce the first large-scale AI-AI Socratic debate framework, encompassing 131 relational moral dilemmas. Six persona dimensions (age, gender, nationality, social class, ideology, and personality trait) are systematically parameterized. The framework integrates multi-agent debate simulation, logit-based confidence analysis, and quantitative measurement of emotional and credibility-oriented rhetorical strength. Contribution/Results: We provide the first empirical evidence that persona dimensions differentially shape AI moral stance formation and debate outcomes: ideology and personality trait predominantly govern persuasive efficacy, with liberal and open-minded agents achieving higher consensus rates. Initial positions, final conclusions, and persuasion success are all significantly persona-dependent. Moreover, debates exhibit a progressive rationalization trend: logical confidence increases while emotional and credibility-based rhetoric decreases over time.
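The systematic parameterization of the six persona dimensions can be pictured as sampling from a small Cartesian product of attribute values and rendering each sample into an agent's system prompt. A minimal sketch follows; the dimension value sets and the prompt wording are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Hypothetical value sets for the six persona dimensions; the paper's
# actual choices are not reproduced here.
PERSONA_SPACE = {
    "age": ["young", "middle-aged", "older"],
    "gender": ["male", "female"],
    "country": ["US", "India", "Germany"],
    "class": ["working", "middle", "upper"],
    "ideology": ["liberal", "conservative"],
    "personality": ["open", "conscientious", "neurotic"],
}

def sample_persona(rng=random):
    """Draw one persona by picking a value on each of the six dimensions."""
    return {dim: rng.choice(vals) for dim, vals in PERSONA_SPACE.items()}

def persona_prompt(persona):
    """Render a persona dict as a system-prompt fragment for a debate agent."""
    return ("You are a {age} {gender} debater from {country}, "
            "{class}-class, politically {ideology}, with a "
            "'{personality}' disposition.").format(**persona)
```

Enumerating the full product of these value sets (rather than sampling) would give the exhaustive persona grid implied by "systematically parameterized".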

📝 Abstract
As large language models (LLMs) are increasingly used in morally sensitive domains, it is crucial to understand how persona traits affect their moral reasoning and persuasive behavior. We present the first large-scale study of multi-dimensional persona effects in AI-AI debates over real-world moral dilemmas. Using a 6-dimensional persona space (age, gender, country, class, ideology, and personality), we simulate structured debates between AI agents over 131 relationship-based cases. Our results show that personas affect initial moral stances and debate outcomes, with political ideology and personality traits exerting the strongest influence. Persuasive success varies across traits, with liberal and open personalities reaching higher consensus and win rates. While logit-based confidence grows during debates, emotional and credibility-based appeals diminish, indicating more tempered argumentation over time. These trends mirror findings from psychology and cultural studies, reinforcing the need for persona-aware evaluation frameworks for AI moral reasoning.
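The "logit-based confidence" measure mentioned in the abstract can be illustrated as a softmax over the raw logits of the candidate stance tokens, as might be read from an LLM API that exposes top-token log-probabilities. This is a generic sketch of that idea, not the paper's exact formulation; the token names are placeholders.

```python
import math

def stance_confidence(logits, stance_tokens=("A", "B")):
    """Softmax over the logits of the candidate stance tokens.

    `logits` maps token strings to raw logit scores. Returns a dict
    of normalized probabilities over `stance_tokens`, interpretable
    as the model's confidence in each stance.
    """
    scores = [logits[t] for t in stance_tokens]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return {t: e / total for t, e in zip(stance_tokens, exps)}
```

Tracking this quantity turn by turn is one way to operationalize the reported trend of growing confidence across a debate.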
Problem

Research questions and friction points this paper is trying to address.

Examining how persona traits affect AI moral reasoning and persuasion
Analyzing multi-dimensional persona effects in AI-AI moral debates
Assessing persuasive success variations across different persona traits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional persona space for AI debates
Simulated structured debates on moral dilemmas
Persona-aware evaluation frameworks for AI