🤖 AI Summary
This study investigates the variability and stability of moral judgments made by large language models (LLMs) under persona role-play. To quantify these properties, the authors introduce two metrics: *moral susceptibility*, the sensitivity of a model’s moral judgments to role-based prompts, and *moral robustness*, the consistency of its moral judgments within a given role. They present the first adaptation of the Moral Foundations Questionnaire (MFQ) for fine-grained, quantitative evaluation of LLMs’ moral reasoning, establishing a benchmark for both cross-role and within-role variability in moral scores. Through systematic experiments across model families (Claude, Gemini, GPT-4, among others), parameter scales, and role prompts, they find that model family is the dominant factor governing moral robustness, with the Claude family the most robust, while model size shows no systematic effect on robustness; within a family, larger models are more morally susceptible. Crucially, moral susceptibility and robustness are positively correlated, an association most pronounced at the family level. This work provides a reproducible, granular, and quantitatively grounded framework for evaluating moral alignment in LLMs.
📝 Abstract
Large language models (LLMs) increasingly operate in social contexts, motivating analysis of how they express and shift moral judgments. In this work, we investigate the moral responses of LLMs under persona role-play, in which an LLM is prompted to assume a specific character. Using the Moral Foundations Questionnaire (MFQ), we introduce a benchmark that quantifies two properties: moral susceptibility and moral robustness, defined from the variability of MFQ scores across and within personas, respectively. We find that, for moral robustness, model family accounts for most of the variance, while model size shows no systematic effect. The Claude family is, by a significant margin, the most robust, followed by the Gemini and GPT-4 models, with other families exhibiting lower robustness. In contrast, moral susceptibility exhibits a mild family effect but a clear within-family size effect, with larger variants being more susceptible. Moreover, robustness and susceptibility are positively correlated, an association that is more pronounced at the family level. Additionally, we present moral foundation profiles for models without persona role-play and for personas averaged across models. Together, these analyses provide a systematic view of how persona conditioning shapes moral behavior in large language models.
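
The abstract defines both metrics only informally, from the variability of MFQ scores across and within personas. Below is a minimal sketch of one plausible operationalization, not the paper’s exact formulas: susceptibility is taken as the spread of persona-level mean scores, and robustness as the negated average within-persona spread, so that higher values indicate more consistent judgments. The array shape and values are purely illustrative.

```python
import numpy as np

# Hypothetical MFQ scores: rows are personas, columns are repeated
# administrations of the questionnaire under the same persona.
scores = np.array([
    [3.1, 3.0, 3.2],  # persona A
    [4.5, 4.4, 4.6],  # persona B
    [2.0, 2.2, 2.1],  # persona C
])

# Moral susceptibility: variability ACROSS personas, here the standard
# deviation of each persona's mean MFQ score.
susceptibility = scores.mean(axis=1).std()

# Moral robustness: consistency WITHIN each persona, here the negated
# mean of the within-persona standard deviations (higher = more robust).
robustness = -scores.std(axis=1).mean()

print(f"susceptibility={susceptibility:.3f}, robustness={robustness:.3f}")
```

Under this sketch, a model whose scores barely move between personas has low susceptibility, and one whose scores barely move across repeated runs of the same persona has high robustness, matching the across/within distinction in the abstract.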