🤖 AI Summary
Prior LLM alignment methods overlook sociocultural value diversity, assuming homogeneous human preferences. Method: We incorporate demographic attributes (gender, political orientation, and race) into human feedback collection, gathering 27,375 fine-grained preference ratings from 1,095 participants across the U.S. and Germany. We systematically evaluate alignment algorithms, including DPO and GRPO, on trade-offs between safety and inclusivity. Contribution/Results: We present the first empirical evidence of systematic, demographically grounded disparities in how "safety" and "inclusivity" are judged across groups. Preserving annotator disagreement rather than enforcing consensus yields roughly 53% greater toxicity reduction than majority voting, and a five-point Likert scale outperforms binary labeling with about 22% more reduction. DPO consistently surpasses GRPO in multi-objective alignment. This work challenges the monolithic expert paradigm, providing both theoretical grounding and empirical validation for fair, robust value-aligned LLMs.
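The DPO-versus-GRPO comparison rests on DPO's pairwise preference objective. A minimal sketch of the per-pair DPO loss follows; the beta value and the log-probabilities are illustrative assumptions, not values from the study:

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    The loss rewards the policy for widening the log-probability margin of
    the chosen response over the rejected one, measured relative to a
    frozen reference model and scaled by beta.
    """
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # negative log-sigmoid of the scaled margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that already prefers the chosen response incurs a lower loss
# than one that prefers the rejected response.
aligned = dpo_loss(-1.0, -3.0, -2.0, -2.0)     # positive margin
misaligned = dpo_loss(-3.0, -1.0, -2.0, -2.0)  # negative margin
```

Because the loss depends only on log-probability margins, it optimizes preferences directly without training a separate reward model, which is part of why it is attractive for multi-objective comparisons like the one in this paper.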
📝 Abstract
Although large language models (LLMs) are increasingly trained with human feedback for safety and alignment with human values, alignment decisions often overlook human social diversity. This study examines how incorporating pluralistic values affects LLM behavior by systematically evaluating demographic variation and design parameters in the alignment pipeline. We collected alignment data from US and German participants (N = 1,095; 27,375 ratings) who rated LLM responses across five dimensions: Toxicity, Emotional Awareness (EA), Sensitivity, Stereotypical Bias, and Helpfulness. We fine-tuned multiple large language models and large reasoning models on preferences from different social groups while varying rating scales, disagreement-handling methods, and optimization techniques. The results revealed systematic demographic effects: male participants rated responses 18% less toxic than female participants, while conservative and Black participants rated responses 27.9% and 44% more emotionally aware than liberal and White participants, respectively. Models fine-tuned on group-specific preferences exhibited distinct behaviors. Technical design choices had strong effects: preserving rater disagreement achieved roughly 53% greater toxicity reduction than majority voting; 5-point scales yielded about 22% more reduction than binary formats; and Direct Preference Optimization (DPO) consistently outperformed Group Relative Policy Optimization (GRPO) in multi-value optimization. These findings are a preliminary step toward answering a critical question: How should alignment balance expert-driven and user-driven signals to ensure both safety and fair representation?
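The contrast between majority voting and disagreement preservation can be made concrete with label aggregation. A minimal sketch under stated assumptions follows; the function names and the mapping from a 5-point Likert scale to a soft probability are illustrative, not the paper's actual implementation:

```python
from statistics import mean

def majority_vote(ratings, threshold=3):
    """Collapse per-annotator 5-point Likert ratings into one binary label,
    discarding all information about disagreement."""
    votes = [1 if r > threshold else 0 for r in ratings]
    return 1 if sum(votes) > len(votes) / 2 else 0

def soft_label(ratings, scale_max=5):
    """Preserve disagreement: map each rating linearly onto [0, 1] and
    average, so a split jury yields an intermediate target."""
    return mean((r - 1) / (scale_max - 1) for r in ratings)

ratings = [5, 4, 2, 1, 4]          # annotators disagree on this response
majority = majority_vote(ratings)  # single hard label; dissent is lost
soft = soft_label(ratings)         # fractional label retaining the split
```

Training on soft targets like `soft_label` lets the model weight contested examples by how contested they are, which is one plausible mechanism behind the larger toxicity reduction the paper reports for disagreement-preserving aggregation.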