🤖 AI Summary
This work proposes an evaluation framework for value alignment in large language models that moves beyond marginal distributions to the multivariate joint distribution of cultural values observed in real populations. Using data from the World Values Survey, the study compares two alignment approaches, persona prompting and demographic fine-tuning, assessing how faithfully each matches not only the marginal distributions of individual survey items but also the correlational structure among values. While demographic fine-tuning aligns better with the marginal distributions, both methods fail to reproduce the authentic patterns of interdependence present in human populations. These findings identify "representativeness", the ability to reflect real-world multivariate value correlations, as a critical dimension of alignment distinct from marginal fit, offering a more comprehensive perspective for evaluating value alignment in language models.
📝 Abstract
Large language models are increasingly used to represent human opinions, values, or beliefs, and their steerability towards such targets is an active area of research. Existing work focuses predominantly on aligning marginal response distributions, treating each survey item independently. While essential, this focus may overlook deeper latent structures that characterise real populations and underpin cultural values theories. We propose a framework for evaluating the representativeness of aligned models through multivariate correlation patterns in addition to marginal distributions. We demonstrate the value of our evaluation scheme by comparing two model steering techniques (persona prompting and demographic fine-tuning) against human responses from the World Values Survey. While the demographically fine-tuned model approximates marginal response distributions better than persona prompting, both techniques fail to fully capture the gold-standard correlation patterns. We conclude that representativeness is a distinct aspect of value alignment and that an evaluation focused on marginals can mask structural failures, leading to overly optimistic conclusions about model capabilities.
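To make the two-level evaluation concrete, here is a minimal sketch (not taken from the paper) of how marginal fit and correlational fit can diverge. It assumes survey responses are coded as integer Likert options; the helper names `marginal_tvd` and `correlation_gap` are hypothetical, and the specific metrics (total variation distance on per-item marginals, Frobenius distance between item-item Spearman correlation matrices) are illustrative choices, not necessarily the paper's exact measures.

```python
import numpy as np
from scipy.stats import spearmanr

def marginal_tvd(human, model, n_options):
    """Mean total variation distance between per-item response distributions."""
    dists = []
    for item in range(human.shape[1]):
        h = np.bincount(human[:, item], minlength=n_options) / human.shape[0]
        m = np.bincount(model[:, item], minlength=n_options) / model.shape[0]
        dists.append(0.5 * np.abs(h - m).sum())
    return float(np.mean(dists))

def correlation_gap(human, model):
    """Frobenius distance between item-item Spearman correlation matrices."""
    rho_h, _ = spearmanr(human)  # (n_items, n_items) correlation matrix
    rho_m, _ = spearmanr(model)
    return float(np.linalg.norm(rho_h - rho_m))

# Toy usage: 1000 respondents x 5 Likert items, answers coded 0..4.
rng = np.random.default_rng(0)
human_responses = rng.integers(0, 5, size=(1000, 5))
model_responses = rng.integers(0, 5, size=(1000, 5))

print(marginal_tvd(human_responses, model_responses, n_options=5))
print(correlation_gap(human_responses, model_responses))
```

A steered model could score well on the first metric while diverging badly on the second, which is exactly the failure mode the abstract describes: marginals look aligned while the interdependence structure among values is lost.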