AI Summary
This work addresses the challenge of unsupervised, transferable representation and alignment of human values in large language models (LLMs): specifically, how to characterize their implicit value orientations without supervision. To this end, we propose UniVaR, a model- and data-agnostic, high-dimensional distributional representation of human values that enables cross-lingual and cross-model value comparison. UniVaR is the first framework for general value modeling without large-scale human annotation, integrating multilingual semantic alignment with probabilistic distribution learning. We learn value representations from the value-laden outputs of eight multilingual LLMs and validate UniVaR on Llama2, ChatGPT, JAIS, and Yi. The results reveal systematic cross-cultural and cross-lingual differences in how models prioritize values, demonstrating UniVaR's efficacy in quantifying and interpreting value biases. UniVaR establishes a new paradigm for value interpretability and controllable alignment in foundation models.
Abstract
The widespread application of Large Language Models (LLMs) across various tasks and fields has necessitated aligning these models with human values and preferences. Given the variety of human value alignment approaches, ranging from Reinforcement Learning from Human Feedback (RLHF) to constitutional learning, there is an urgent need to understand the scope and nature of the human values injected into these models before their release. There is also a need for model alignment that avoids a costly, large-scale human annotation effort. We propose UniVaR, a high-dimensional representation of human value distributions in LLMs, orthogonal to model architecture and training data. Trained on the value-relevant output of eight multilingual LLMs and tested on the output of four multilingual LLMs, namely Llama2, ChatGPT, JAIS, and Yi, UniVaR proves a powerful tool for comparing the distributions of human values embedded in LLMs trained on different language sources. Through UniVaR, we explore how different LLMs prioritize various values across languages and cultures, shedding light on the complex interplay between human values and language modeling.
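To make the comparison idea concrete, here is a minimal, self-contained sketch of comparing value distributions across two models: embed each model's value-laden answers and measure the similarity of the resulting mean embeddings. The hashed bag-of-words encoder, the `model_a`/`model_b` answer lists, and the mean-embedding-plus-cosine comparison are all illustrative stand-ins, not UniVaR's actual learned embedding model or evaluation protocol.

```python
import hashlib
import math


def embed(text, dim=64):
    """Toy stand-in for a value-embedding model: a hashed bag-of-words
    vector, L2-normalized. (UniVaR learns its embeddings from the
    value-laden outputs of multilingual LLMs; this is purely illustrative.)"""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def mean_embedding(texts, dim=64):
    """Summarize a model's value-laden answers as the mean of their embeddings."""
    vecs = [embed(t, dim) for t in texts]
    return [sum(col) / len(vecs) for col in zip(*vecs)]


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))) or 1.0
    return num / den


# Hypothetical value-laden answers elicited from two different models.
model_a = ["family duty comes first", "respect for elders matters most"]
model_b = ["individual freedom comes first", "personal choice matters most"]

# A high score suggests the two models express similar value orientations.
sim = cosine(mean_embedding(model_a), mean_embedding(model_b))
print(f"cross-model value similarity: {sim:.3f}")
```

In practice one would replace the toy encoder with a trained multilingual embedding model and compare full distributions rather than means, but the pipeline shape (elicit value-laden text, embed, compare) is the same.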