🤖 AI Summary
Current large language models (LLMs) predominantly rely on averaged alignment criteria (e.g., HHH), rendering them ill-suited for complex, imbalanced value systems across diverse cultural contexts. We identify two key bottlenecks: (1) high-order interdependencies and relative prioritization among values (termed *value complexity*); and (2) insufficient controllability over nuanced, underrepresented value priorities (termed *value steerability*). To address these, we propose COUPLE, the first framework to integrate structural causal models (SCMs) into value alignment. COUPLE explicitly encodes the causal dependency structure among multidimensional values and models their counterfactual influence on model generations, enabling fine-grained, interpretable, and prioritized value steering. Experiments on two benchmarks with different value systems demonstrate that COUPLE significantly improves both alignment accuracy and controllability across heterogeneous value objectives, outperforming state-of-the-art baselines.
📝 Abstract
As large language models (LLMs) become increasingly integrated into applications serving users across diverse cultures, communities, and demographics, it is critical to align LLMs with pluralistic human values beyond average principles (e.g., HHH). In psychological and social value theories such as Schwartz's Value Theory, pluralistic values are represented by multiple value dimensions paired with varying priorities. However, existing methods encounter two challenges when aligning with such fine-grained value objectives: 1) they often treat multiple values as independent and equally important, ignoring their interdependence and relative priorities (value complexity); 2) they struggle to precisely control nuanced value priorities, especially underrepresented ones (value steerability). To handle these challenges, we propose COUPLE, a COUnterfactual reasoning framework for PLuralistic valuE alignment. It introduces a structural causal model (SCM) to capture the complex interdependencies and prioritization among values, as well as the causal relationship between high-level value dimensions and behaviors. Moreover, it applies counterfactual reasoning to generate outputs aligned with any desired value objective. Benefiting from explicit causal modeling, COUPLE also provides better interpretability. We evaluate COUPLE on two datasets with different value systems and demonstrate that COUPLE outperforms other baselines across diverse types of value objectives.
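For intuition only, the SCM-plus-counterfactual idea behind COUPLE can be sketched with a toy linear model. This is not the paper's implementation: the value dimensions (`tradition`, `self_direction`), the coefficients, and the abduction-then-intervention procedure shown here are invented for illustration and stand in for whatever structure the learned SCM actually encodes.

```python
# Toy SCM: two interdependent value dimensions and a downstream behavior score.
# Exogenous noise (u_t, u_s) captures context; passing `tradition` performs a
# do-intervention, overriding its structural equation.
def scm(u_t, u_s, tradition=None):
    t = u_t if tradition is None else tradition  # do(T = tradition) if given
    s = -0.8 * t + u_s                           # self-direction trades off against tradition
    b = 0.3 * t + 0.7 * s                        # behavior weighs values by (hypothetical) priority
    return t, s, b

# Abduction: fix the noise that explains one observed context.
u_t, u_s = 0.5, 0.1
_, _, factual = scm(u_t, u_s)

# Counterfactual: same noise, but intervene to prioritize tradition.
_, _, counterfactual = scm(u_t, u_s, tradition=1.0)
```

Because `self_direction` depends causally on `tradition`, the intervention propagates through the dependency structure rather than flipping one value in isolation, which is the kind of effect averaged, independence-assuming alignment objectives miss.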