How Inclusively do LMs Perceive Social and Moral Norms?

📅 2025-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how inclusively language models (LMs) perceive pluralistic social and moral norms, focusing on value alignment across demographic dimensions (gender, age, and income). Method: the authors build a rule-based prompting evaluation framework, comparing responses from 11 LMs against judgments from 100 human annotators, and propose Absolute Distance Alignment (ADA-Met), a metric designed to quantify alignment on ordinal-value judgments. Contribution/Results: empirical analysis reveals systematic biases: LM responses align more closely with young and high-income groups while underrepresenting elderly, low-income, and certain gender groups, exposing structural inequities in moral-reasoning outputs. Through large-scale prompting, human-annotated ground truth, and statistical testing, the paper provides comprehensive evidence that LMs lack demographic representativeness in moral judgment. Code and prompts are publicly released under CC BY-NC 4.0.

📝 Abstract
Content warning: this paper discusses and contains offensive content. Language models (LMs) are used in decision-making systems and as interactive assistants. However, how well do the judgements these models make align with the diversity of human values, particularly regarding social and moral norms? In this work, we investigate how inclusively LMs perceive norms across demographic groups (e.g., gender, age, and income). We prompt 11 LMs on rules-of-thumb (RoTs) and compare their outputs with the existing responses of 100 human annotators. We introduce the Absolute Distance Alignment Metric (ADA-Met) to quantify alignment on ordinal questions. We find notable disparities in LM responses, with younger, higher-income groups showing closer alignment, raising concerns about the representation of marginalized perspectives. Our findings highlight the importance of further efforts to make LMs more inclusive of diverse human values. The code and prompts are available on GitHub under the CC BY-NC 4.0 license.
Problem

Research questions and friction points this paper is trying to address.

Investigates inclusivity of LMs in perceiving social norms.
Compares LM judgments with diverse human values.
Highlights disparities in LM alignment across demographic groups.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rule-based prompting framework probing 11 language models on rules-of-thumb (RoTs).
Absolute Distance Alignment Metric (ADA-Met) for quantifying alignment on ordinal questions.
Comparison against 100 human annotators to measure alignment with diverse human values across demographic groups.
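ADA-Met is described here only as an absolute-distance metric for ordinal questions; the exact formula is not given in this summary. The following is a minimal sketch of what such a metric might look like, assuming Likert-style ratings and a normalization to [0, 1] (the function name, arguments, and normalization are illustrative assumptions, not the paper's definition).

```python
def ada_alignment(lm_responses, human_responses, scale_max, scale_min=1):
    """Return an alignment score in [0, 1], where 1 means perfect agreement.

    lm_responses, human_responses: equal-length lists of ordinal ratings
    (e.g., 1-5 Likert values) for the same rules-of-thumb.
    """
    if len(lm_responses) != len(human_responses):
        raise ValueError("response lists must be the same length")
    span = scale_max - scale_min
    # Normalized absolute distance per item, averaged over all items
    distances = [abs(l - h) / span for l, h in zip(lm_responses, human_responses)]
    return 1.0 - sum(distances) / len(distances)

# Example: an LM's ratings vs. one demographic group's ratings on a 1-5 scale
print(ada_alignment([5, 4, 2, 1], [4, 4, 3, 2], scale_max=5))  # 0.8125
```

Computing this score separately per demographic group (e.g., by age or income bracket) would surface the kind of alignment disparities the paper reports.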