🤖 AI Summary
This study investigates whether large language models (LLMs) internalize human deep values—stable, principle-based ethical commitments—rather than merely mimicking shallow, context-dependent preferences.
Method: We propose a Deep Value Evaluation Framework featuring a feature-decoupled train-test paradigm, a controlled human preference dataset grounded in moral psychology, the Deep Value Generalization Rate (DVGR) as the primary metric, and complementary formal option comparisons with human validation.
Contribution/Results: Evaluating nine mainstream LLMs, we find an average DVGR of only 0.30—significantly below the random baseline of 0.5—and observe no improvement with scale; DVGR slightly declines as parameter count increases. This work provides the first systematic evidence that current LLMs lack intrinsic generalization capability for deep values, revealing a critical gap in value alignment. It establishes a reproducible, decomposable methodology for modeling and evaluating value alignment, enabling rigorous, fine-grained analysis of ethical generalization in foundation models.
📝 Abstract
We introduce the Deep Value Benchmark (DVB), an evaluation framework that directly tests whether large language models (LLMs) learn fundamental human values or merely surface-level preferences. This distinction is critical for AI alignment: Systems that capture deeper values are likely to generalize human intentions robustly, while those that capture only superficial patterns in preference data risk producing misaligned behavior. The DVB uses a novel experimental design with controlled confounding between deep values (e.g., moral principles) and shallow features (e.g., superficial attributes). In the training phase, we expose LLMs to human preference data with deliberately correlated deep and shallow features -- for instance, where a user consistently prefers (non-maleficence, formal language) options over (justice, informal language) alternatives. The testing phase then breaks these correlations, presenting choices between (justice, formal language) and (non-maleficence, informal language) options. This design allows us to precisely measure a model's Deep Value Generalization Rate (DVGR) -- the probability of generalizing based on the underlying value rather than the shallow feature. Across 9 different models, the average DVGR is just 0.30. All models generalize deep values less than chance. Larger models have a (slightly) lower DVGR than smaller models. We are releasing our dataset, which was subject to three separate human validation experiments. DVB provides an interpretable measure of a core feature of alignment.