Moral Susceptibility and Robustness under Persona Role-Play in Large Language Models

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the variability and stability of moral judgments made by large language models (LLMs) in role-playing contexts. To quantify these properties, we introduce two novel metrics: *moral susceptibility*—measuring sensitivity to role-based prompts—and *moral robustness*—assessing consistency of moral judgments across roles. We are the first to adapt the Moral Foundations Questionnaire (MFQ) for fine-grained, quantitative evaluation of LLMs’ moral reasoning capabilities, establishing a benchmark for both cross-role and within-role moral score variability. Through systematic experiments across multiple model families (Claude, Gemini, GPT-4), parameter scales, and role prompts, we find that model family is the dominant factor governing moral robustness (with Claude exhibiting the highest robustness), while larger model scale positively correlates with moral susceptibility. Crucially, moral susceptibility and robustness exhibit a significant positive correlation. This work provides a reproducible, granular, and quantitatively grounded framework for evaluating moral alignment in LLMs.

📝 Abstract
Large language models (LLMs) increasingly operate in social contexts, motivating analysis of how they express and shift moral judgments. In this work, we investigate the moral response of LLMs to persona role-play, prompting an LLM to assume a specific character. Using the Moral Foundations Questionnaire (MFQ), we introduce a benchmark that quantifies two properties: moral susceptibility and moral robustness, defined from the variability of MFQ scores across and within personas, respectively. We find that, for moral robustness, model family accounts for most of the variance, while model size shows no systematic effect. The Claude family is, by a significant margin, the most robust, followed by Gemini and GPT-4 models, with other families exhibiting lower robustness. In contrast, moral susceptibility exhibits a mild family effect but a clear within-family size effect, with larger variants being more susceptible. Moreover, robustness and susceptibility are positively correlated, an association that is more pronounced at the family level. Additionally, we present moral foundation profiles for models without persona role-play and for personas averaged across models. Together, these analyses provide a systematic view of how persona conditioning shapes moral behavior in large language models.
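The two metrics rest on a simple idea: susceptibility tracks how much MFQ scores move *across* personas, robustness how little they move *within* a persona. A minimal sketch of one plausible formulation (the paper's exact formulas may differ; the array layout, function name, and the use of standard deviations are assumptions for illustration):

```python
import numpy as np

def moral_metrics(scores: np.ndarray) -> tuple[float, float]:
    """Illustrative susceptibility/robustness from MFQ scores.

    scores: shape (n_personas, n_runs, n_foundations) — a model's MFQ
    foundation scores over repeated runs under each persona.
    Susceptibility: std of persona-mean scores across personas,
    averaged over foundations (higher = scores shift more with persona).
    Robustness: negated mean within-persona std across runs
    (higher, i.e. closer to zero, = more consistent judgments).
    """
    persona_means = scores.mean(axis=1)              # (n_personas, n_foundations)
    susceptibility = persona_means.std(axis=0).mean()
    within_persona_std = scores.std(axis=1)          # (n_personas, n_foundations)
    robustness = -within_persona_std.mean()
    return float(susceptibility), float(robustness)
```

With two personas whose scores are perfectly stable across runs but differ from each other, susceptibility is positive while robustness is maximal (zero under this sign convention).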
Problem

Research questions and friction points this paper is trying to address.

Analyzes how persona role-play affects moral judgments in large language models
Quantifies moral susceptibility and robustness using Moral Foundations Questionnaire benchmarks
Examines systematic variations in moral behavior across different model families and sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark quantifies moral susceptibility and robustness
Model family determines robustness, size affects susceptibility
Claude models show highest robustness in persona testing
Davi Bastos Costa
University of São Paulo
Felippe Alves
University of São Paulo
Renato Vicente
University of São Paulo
Information Theory · Machine Learning · Complex Systems · Evolutionary Dynamics · Computational Finance