One Model, Many Morals: Uncovering Cross-Linguistic Misalignments in Computational Moral Reasoning

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs), pretrained predominantly on English data, exhibit systematic biases and inconsistencies in moral judgment across multilingual and multicultural contexts. Method: We introduce the first cross-cultural moral reasoning benchmark covering five languages, using zero-shot evaluation to uncover language-specific judgment disparities; we propose a structured taxonomy of moral reasoning errors and analyze pretraining data provenance, identifying cultural misalignment as the primary source of bias. Contribution/Results: This work provides the first empirical evidence that language is not merely an input modality but a constitutive factor shaping moral cognition. Our findings establish a methodological foundation and empirical basis for culturally aware AI design, advancing moral alignment research from monolingual toward genuinely multicultural paradigms. The benchmark, error taxonomy, and causal analysis together enable rigorous cross-linguistic evaluation of ethical reasoning in LLMs.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed in multilingual and multicultural environments where moral reasoning is essential for generating ethically appropriate responses. Yet the dominant pretraining of LLMs on English-language data raises critical concerns about their ability to generalize judgments across diverse linguistic and cultural contexts. In this work, we systematically investigate how language mediates moral decision-making in LLMs. We translate two established moral reasoning benchmarks into five culturally and typologically diverse languages, enabling multilingual zero-shot evaluation. Our analysis reveals significant inconsistencies in LLMs' moral judgments across languages, often reflecting cultural misalignment. Guided by a set of carefully constructed research questions, we uncover the underlying drivers of these disparities, ranging from outright disagreements in judgment to differences in the reasoning strategies LLMs employ. Finally, through a case study, we examine the role of pretraining data in shaping an LLM's moral compass. We distill our insights into a structured typology of moral reasoning errors that calls for more culturally aware AI.
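The evaluation protocol the abstract describes, posing the same translated moral scenario to a model in several languages and comparing its verdicts, can be sketched in miniature. Everything below is illustrative: `judge` is a stub standing in for a real LLM call, and the scenario IDs, language codes, and the flipped verdict on "s2" are invented to mimic the kind of cross-language inconsistency the paper reports.

```python
from itertools import combinations

def judge(scenario_id: str, language: str) -> str:
    """Stub for a zero-shot LLM moral judgment (placeholder, not a real model)."""
    # Toy behavior: the model flips its verdict on scenario "s2" outside English,
    # mimicking the language-dependent inconsistencies the paper measures.
    if scenario_id == "s2" and language != "en":
        return "unacceptable"
    return "acceptable"

def cross_language_agreement(scenario_ids, languages):
    """Fraction of language pairs giving the same verdict, averaged over scenarios."""
    rates = []
    for sid in scenario_ids:
        verdicts = {lang: judge(sid, lang) for lang in languages}
        pairs = list(combinations(languages, 2))
        agree = sum(verdicts[a] == verdicts[b] for a, b in pairs)
        rates.append(agree / len(pairs))
    return sum(rates) / len(rates)

scenarios = ["s1", "s2"]
langs = ["en", "hi", "sw", "zh", "es"]  # hypothetical five-language setup
print(round(cross_language_agreement(scenarios, langs), 2))
```

A perfectly language-invariant model would score 1.0 here; any drop below that quantifies exactly the kind of cross-linguistic judgment disparity the benchmark is built to surface.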
Problem

Research questions and friction points this paper is trying to address.

Uncovering moral judgment inconsistencies across languages in LLMs
Investigating cultural misalignment in multilingual moral reasoning
Analyzing pretraining data's role in shaping AI moral compass
Innovation

Methods, ideas, or system contributions that make the work stand out.

Translated moral benchmarks into diverse languages
Uncovered moral judgment inconsistencies across languages
Linked pretraining data to moral reasoning errors