🤖 AI Summary
This study addresses the lack of culturally adapted evaluation benchmarks for moral alignment in French large language models. We introduce FR-MoralStories, the first high-quality French moral reasoning dataset, covering authentic societal scenarios—including tipping, intimate relationships, and animal welfare. Methodologically, we propose a multi-stage cultural adaptation pipeline: native-speaker–led translation from Moral Stories, localized value annotation, and collaborative revision, followed by cross-lingual alignment evaluation using mT5 and XLM-R. Our contributions are fourfold: (1) establishing the first dedicated French moral evaluation benchmark; (2) enabling the first fine-grained, culture-sensitive value annotation for French; (3) empirically demonstrating that mainstream multilingual models possess baseline moral reasoning capability in French but remain vulnerable to preference fine-tuning interference; and (4) releasing a fully open-source, reproducible evaluation framework, substantially enhancing reliability and cross-study comparability in French moral reasoning assessment.
📝 Abstract
Aligning language models with human values is crucial, especially as they become more integrated into everyday life. While models are often adapted to user preferences, it is equally important to ensure they align with moral norms and behaviours in real-world social situations. Despite significant progress in languages like English and Chinese, French has seen little attention in this area, leaving a gap in understanding how LLMs handle moral reasoning in this language. To address this gap, we introduce Histoires Morales, a French dataset derived from Moral Stories, created through translation and subsequently refined with the assistance of native speakers to guarantee grammatical accuracy and adaptation to the French cultural context. We also rely on annotations of the moral values within the dataset to ensure their alignment with French norms. Histoires Morales covers a wide range of social situations, including differences in tipping practices, expressions of honesty in relationships, and responsibilities toward animals. To foster future research, we also conduct preliminary experiments on the alignment of multilingual models on French and English data and the robustness of the alignment. We find that while LLMs are generally aligned with human moral norms by default, they can be easily influenced with user-preference optimization for both moral and immoral data.