Untangling Input Language from Reasoning Language: A Diagnostic Framework for Cross-Lingual Moral Alignment in LLMs

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether the moral judgment discrepancies that large language models exhibit in multilingual settings stem from the input language or from the reasoning language. To disentangle the two, the authors propose a diagnostic framework that combines cross-lingual moral dilemma experiments, Moral Foundations Theory analysis, and variance decomposition, and apply it to 13 mainstream models. The findings show that the reasoning language contributes roughly twice as much variance to moral judgments as the input language, and that nearly half of the models display context-dependent behavior that standard matched-language evaluation misses. As a side result, the empirical evidence supports splitting the "Authority" moral foundation into familial and institutional sub-dimensions, and the observed behavior patterns are organized into a diagnostic taxonomy that offers deployment guidance.

📝 Abstract
When LLMs judge moral dilemmas, do they reach different conclusions in different languages, and if so, why? Two factors could drive such differences: the language of the dilemma itself, or the language in which the model reasons. Standard evaluation conflates these by testing only matched conditions (e.g., English dilemma with English reasoning). We introduce a methodology that separately manipulates each factor, covering also mismatched conditions (e.g., English dilemma with Chinese reasoning), enabling decomposition of their contributions. To study what changes, we propose an approach to interpret the moral judgments in terms of Moral Foundations Theory. As a side result, we identify evidence for splitting the Authority dimension into a family-related and an institutional dimension. Applying this methodology to English-Chinese moral judgment with 13 LLMs, we demonstrate its diagnostic power: (1) the framework isolates reasoning-language effects as contributing twice the variance of input-language effects; (2) it detects context-dependency in nearly half of models that standard evaluation misses; and (3) a diagnostic taxonomy translates these patterns into deployment guidance. We release our code and datasets at https://anonymous.4open.science/r/CrossCulturalMoralJudgement.
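The variance-decomposition idea can be pictured with a small sketch. The snippet below is not the authors' released code: the 2x2 condition layout (input language x reasoning language, matched and mismatched) follows the abstract, but the per-condition scores, the `main_effect_ss` helper, and the language labels are illustrative placeholders and do not reproduce the paper's reported two-to-one ratio.

```python
# Minimal sketch of a two-factor variance decomposition over the 2x2
# design described in the abstract (input language x reasoning language).
# Not the authors' released code; scores below are arbitrary placeholders.
import statistics

# Hypothetical per-condition mean judgment scores for a single dilemma.
# Keys are (input_language, reasoning_language).
scores = {
    ("en", "en"): 0.72,  # matched: English dilemma, English reasoning
    ("en", "zh"): 0.55,  # mismatched: English dilemma, Chinese reasoning
    ("zh", "en"): 0.70,  # mismatched: Chinese dilemma, English reasoning
    ("zh", "zh"): 0.52,  # matched: Chinese dilemma, Chinese reasoning
}

grand_mean = statistics.mean(scores.values())

def main_effect_ss(factor_index: int) -> float:
    """Sum of squares attributable to one factor's main effect."""
    ss = 0.0
    for level in {key[factor_index] for key in scores}:
        level_scores = [v for k, v in scores.items() if k[factor_index] == level]
        ss += len(level_scores) * (statistics.mean(level_scores) - grand_mean) ** 2
    return ss

ss_input = main_effect_ss(0)      # variation tied to the dilemma's language
ss_reasoning = main_effect_ss(1)  # variation tied to the reasoning language

print(f"input-language SS:     {ss_input:.4f}")
print(f"reasoning-language SS: {ss_reasoning:.4f}")
```

In the paper's full setting this kind of decomposition would be computed per dilemma and per model across the 13 LLMs; the sketch only shows how main-effect sums of squares separate the input-language and reasoning-language contributions.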
Problem

Research questions and friction points this paper is trying to address.

cross-lingual
moral alignment
language model
moral judgment
reasoning language
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-lingual moral alignment
reasoning language
input language
Moral Foundations Theory
diagnostic framework