CVC: A Large-Scale Chinese Value Rule Corpus for Value Alignment of Large Language Models

📅 2025-06-02
🤖 AI Summary
Current LLM value evaluation suffers from Western-centrism, absence of a Chinese-aligned value framework, and poor scalability in cross-cultural scenario generation. To address these issues, we propose the first hierarchical value system grounded in core Chinese values—comprising three dimensions, twelve core values, and fifty derived values—alongside CVC, a large-scale Chinese Value Corpus containing over 250K human-enhanced value rules and 400K moral dilemma scenarios. We introduce a rule-driven, cross-culturally sensitive scenario generation paradigm integrating hierarchical value modeling, rule-augmented annotation, adversarial generation of sensitive topics, and multi-model preference evaluation. Experiments demonstrate that CVC-guided scenarios significantly outperform baselines in value boundary fidelity and scenario diversity; seven mainstream LLMs select CVC-aligned options in over 70.5% of cases across six sensitive issue categories; and five Chinese annotators achieve an 87.5% agreement rate on CVC’s value expression validity.

📝 Abstract
Ensuring that Large Language Models (LLMs) align with mainstream human values and ethical norms is crucial for the safe and sustainable development of AI. Current value evaluation and alignment are constrained by Western cultural bias and by incomplete domestic frameworks that rely on non-native rules; moreover, the lack of scalable, rule-driven scenario generation methods makes evaluation costly and inadequate across diverse cultural contexts. To address these challenges, we propose a hierarchical value framework grounded in core Chinese values, encompassing three main dimensions, 12 core values, and 50 derived values. Based on this framework, we construct a large-scale Chinese Values Corpus (CVC) containing over 250,000 value rules enhanced and expanded through human annotation. Experimental results show that CVC-guided scenarios outperform directly generated ones in value-boundary adherence and content diversity. In an evaluation across six sensitive themes (e.g., surrogacy, suicide), seven mainstream LLMs preferred CVC-generated options in over 70.5% of cases, while five Chinese human annotators showed 87.5% alignment with CVC, confirming its universality, cultural relevance, and strong alignment with Chinese values. Additionally, we construct 400,000 rule-based moral dilemma scenarios that objectively capture nuanced distinctions in conflicting value prioritization across 17 LLMs. Our work establishes a culturally adaptive benchmarking framework for comprehensive value evaluation and alignment, one that reflects Chinese characteristics. All data are available at https://huggingface.co/datasets/Beijing-AISI/CVC, and the code is available at https://github.com/Beijing-AISI/CVC.
Problem

Research questions and friction points this paper is trying to address.

Addressing Western bias in LLM value alignment
Lack of scalable Chinese value evaluation frameworks
High-cost, inadequate cross-cultural scenario generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Chinese value framework with 50 values
Large-scale corpus with 250k human-annotated rules
Rule-based moral dilemma scenarios for 17 LLMs
Authors

Ping Wu
BrainCog Lab, CASIA; University of Chinese Academy of Sciences

Guobin Shen
BrainCog Lab, CASIA; Beijing Institute of AI Safety and Governance; Long-term AI Lab

Dongcheng Zhao
Beijing Institute of AI Safety and Governance

Yuwei Wang
BrainCog Lab, CASIA; Beijing Institute of AI Safety and Governance; Long-term AI Lab

Yiting Dong
Peking University; Institute of Automation, CAS

Yu Shi
University of Chinese Academy of Sciences

Enmeng Lu
BrainCog Lab, CASIA; Beijing Institute of AI Safety and Governance; Long-term AI Lab

Feifei Zhao
BrainCog Lab, CASIA; Beijing Institute of AI Safety and Governance; Long-term AI Lab

Yi Zeng
BrainCog Lab, CASIA; University of Chinese Academy of Sciences; Beijing Institute of AI Safety and Governance; Long-term AI Lab