Measuring Human and AI Values based on Generative Psychometrics with Large Language Models

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the challenge of scalable, context-sensitive measurement of human and large language model (LLM) values and its implications for AI safety. To this end, we introduce Generative Psychometrics for Values (GPV), a novel paradigm that leverages LLMs to dynamically parse unstructured text, extract value-laden perception units, and quantify and aggregate value orientations in a free-form, context-aware manner. GPV constitutes the first text-perception-based value measurement framework applicable to both human and LLM subjects. Empirical validation on human blog corpora demonstrates GPV's superior stability and construct validity over conventional psychometric scales. Furthermore, we uncover systematic value biases across mainstream LLMs and establish their predictive influence on safety-critical behaviors, revealing a previously undocumented link between value misalignment and unsafe outputs. These findings provide both a new methodological paradigm and empirical grounding for value alignment research and AI safety governance.

📝 Abstract
Human values and their measurement are a long-standing interdisciplinary inquiry. Recent advances in AI have sparked renewed interest in this area, with large language models (LLMs) emerging as both tools and subjects of value measurement. This work introduces Generative Psychometrics for Values (GPV), an LLM-based, data-driven value measurement paradigm, theoretically grounded in text-revealed selective perceptions. The core idea is to dynamically parse unstructured texts into perceptions akin to static stimuli in traditional psychometrics, measure the value orientations they reveal, and aggregate the results. Applying GPV to human-authored blogs, we demonstrate its stability, validity, and superiority over prior psychological tools. We then extend GPV to LLM value measurement and advance the current art with 1) a psychometric methodology that measures LLM values based on their scalable and free-form outputs, enabling context-specific measurement; 2) a comparative analysis of measurement paradigms, indicating response biases of prior methods; and 3) an attempt to bridge LLM values and their safety, revealing the predictive power of different value systems and the impacts of various values on LLM safety. Through interdisciplinary efforts, we aim to leverage AI for next-generation psychometrics and psychometrics for value-aligned AI.
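The three-step core idea stated above (parse text into perception units, score the value orientation each reveals, then aggregate) can be sketched as a minimal pipeline. This is an illustrative sketch, not the paper's implementation: `parse_perceptions` and `score_perception` are hypothetical stand-ins for LLM calls, and the [0, 1] scoring scale is an assumption.

```python
from statistics import mean

def parse_perceptions(text: str) -> list[str]:
    """Stub for LLM-based parsing of free-form text into perception units."""
    # A real GPV system would prompt an LLM here; this toy version
    # simply treats each sentence as one perception unit.
    return [s.strip() for s in text.split(".") if s.strip()]

def score_perception(perception: str, value: str) -> float:
    """Stub for LLM-based scoring of one perception against one value."""
    # Toy keyword heuristic standing in for an LLM judgment in [0, 1].
    return 1.0 if value.lower() in perception.lower() else 0.0

def measure_value(text: str, value: str) -> float:
    """Aggregate per-perception scores into an overall value orientation."""
    perceptions = parse_perceptions(text)
    if not perceptions:
        return 0.0
    return mean(score_perception(p, value) for p in perceptions)

blog = "I volunteered at the shelter. Helping others matters. The weather was nice."
print(round(measure_value(blog, "help"), 2))  # 0.33: one of three perceptions mentions helping
```

Because the same pipeline runs on any free-form text, it applies equally to human-authored blogs and to LLM outputs, which is what makes the measurement cross-subject and context-specific.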
Problem

Research questions and friction points this paper is trying to address.

Develops a method to measure human and AI values using large language models.
Introduces Generative Psychometrics for Values (GPV) for dynamic value measurement.
Explores the relationship between LLM values and AI safety through psychometric analysis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs for dynamic text parsing into perceptions
Measures value orientations from unstructured text data
Compares and analyzes LLM values for safety impacts