MVPBench: A Benchmark and Fine-Tuning Framework for Aligning Large Language Models with Diverse Human Values

📅 2025-09-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing value alignment benchmarks largely overlook cultural and demographic diversity, leading to inadequate evaluation of models' global applicability. To address this, we propose MVPBench, the first comprehensive benchmark for value alignment covering 75 countries and comprising 24,020 human-annotated samples with fine-grained demographic metadata. It establishes a systematic evaluation framework supporting culture-adaptive and personalized value modeling. Methodologically, we integrate Low-Rank Adaptation (LoRA) with Direct Preference Optimization (DPO), leveraging multidimensional value labels and persona-aware prompting for efficient, lightweight fine-tuning. Experiments demonstrate substantial improvements in both in-domain and cross-domain value alignment performance and uncover significant regional and demographic disparities in alignment capability. This work provides a scalable, empirically grounded data infrastructure and a technically viable pathway toward fair, inclusive, and globally robust AI systems.
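The summary pairs LoRA with DPO for lightweight preference fine-tuning. As a rough illustration of the DPO objective only (not the paper's implementation; the function name, inputs, and β = 0.1 default are assumptions of this sketch), the per-pair loss can be written in plain Python:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair, given summed log-probabilities of the
    chosen/rejected responses under the trainable policy (pi_*) and the
    frozen reference model (ref_*)."""
    # Implicit reward margin between chosen and rejected, scaled by beta.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(logits)): minimized when the policy ranks chosen above rejected.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When policy and reference agree, the loss sits at its chance level, log 2.
assert abs(dpo_loss(-5.0, -7.0, -5.0, -7.0) - math.log(2)) < 1e-9
```

In practice this loss is averaged over a batch of annotated preference pairs such as those MVPBench provides, with the reference model kept frozen.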

๐Ÿ“ Abstract
The alignment of large language models (LLMs) with human values is critical for their safe and effective deployment across diverse user populations. However, existing benchmarks often neglect cultural and demographic diversity, leading to limited understanding of how value alignment generalizes globally. In this work, we introduce MVPBench, a novel benchmark that systematically evaluates LLMs' alignment with multi-dimensional human value preferences across 75 countries. MVPBench contains 24,020 high-quality instances annotated with fine-grained value labels, personalized questions, and rich demographic metadata, making it the most comprehensive resource of its kind to date. Using MVPBench, we conduct an in-depth analysis of several state-of-the-art LLMs, revealing substantial disparities in alignment performance across geographic and demographic lines. We further demonstrate that lightweight fine-tuning methods, such as Low-Rank Adaptation (LoRA) and Direct Preference Optimization (DPO), can significantly enhance value alignment in both in-domain and out-of-domain settings. Our findings underscore the necessity for population-aware alignment evaluation and provide actionable insights for building culturally adaptive and value-sensitive LLMs. MVPBench serves as a practical foundation for future research on global alignment, personalized value modeling, and equitable AI development.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM alignment with diverse global human values
Addressing cultural and demographic diversity gaps in benchmarks
Enhancing value alignment via fine-tuning methods across populations
Innovation

Methods, ideas, or system contributions that make the work stand out.

MVPBench benchmark for global value alignment
Lightweight fine-tuning with LoRA and DPO
Population-aware evaluation with demographic metadata
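To give a sense of why LoRA counts as "lightweight," a minimal forward pass can be sketched with NumPy (the dimensions, rank, and scaling factor below are illustrative defaults, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8    # illustrative sizes; rank r << d

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = 0.01 * rng.normal(size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradient updates.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, LoRA reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)
# Trainable parameters per layer: r*(d_in+d_out) = 512 vs d_in*d_out = 4096.
```

The low-rank update trains only r·(d_in + d_out) parameters per adapted matrix instead of d_in·d_out, which is what makes per-population or persona-specific fine-tuning affordable.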
Yao Liang
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
Dongcheng Zhao
Beijing Institute of AI Safety and Governance
Spiking Neural Networks, Event-Based Vision, Brain-inspired AI, LLM Safety
Feifei Zhao
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China.
Guobin Shen
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China.
Yuwei Wang
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Long-term AI, Beijing, China.
Dongqi Liang
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
Yi Zeng
Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Safe AI and Superalignment, Beijing, China; Center for Long-term AI, Beijing, China.