One fish, two fish, but not the whole sea: Alignment reduces language models' conceptual diversity

📅 2024-11-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether post-training alignment (via RLHF or RLAIF) degrades conceptual diversity in large language models (LLMs), exposing a potential trade-off between value alignment and representational diversity. Method: the authors introduce a population-level conceptual diversity metric grounded in human behavioral research, simulating an LLM "population" through sampling and relating the internal variability of simulated individuals to population-level variation across two domains with rich human benchmark data. Contribution/Results: aligned models generally exhibit lower conceptual diversity than their instruction fine-tuned counterparts, and no model reaches human-level diversity. These findings suggest alignment may compress the space of conceptual representations, offering both a theoretical caution for trustworthy AI development and a quantifiable evaluation framework for alignment-induced representational effects.

📝 Abstract
Researchers in social science and psychology have recently proposed using large language models (LLMs) as replacements for humans in behavioral research. In addition to arguments about whether LLMs accurately capture population-level patterns, this has raised questions about whether LLMs capture human-like conceptual diversity. Separately, it is debated whether post-training alignment (RLHF or RLAIF) affects models' internal diversity. Inspired by human studies, we use a new way of measuring the conceptual diversity of synthetically-generated LLM "populations" by relating the internal variability of simulated individuals to the population-level variability. We use this approach to evaluate non-aligned and aligned LLMs on two domains with rich human behavioral data. While no model reaches human-like diversity, aligned models generally display less diversity than their instruction fine-tuned counterparts. Our findings highlight potential trade-offs between increasing models' value alignment and decreasing the diversity of their conceptual representations.
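The abstract's core idea is to compare the variability inside each simulated "individual" against the variability across the simulated population. The paper's exact metric is not reproduced here, but a minimal sketch of one way such a ratio could be computed (all function names and the embedding setup are illustrative assumptions, not the authors' implementation) might look like:

```python
import numpy as np

def mean_pairwise_dist(X):
    # Average Euclidean distance over all unordered pairs of rows.
    n = len(X)
    total = sum(np.linalg.norm(X[i] - X[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def diversity_ratio(population):
    # population: list of (n_responses, dim) arrays of embedded responses,
    # one array per simulated individual (hypothetical representation).
    means = np.array([ind.mean(axis=0) for ind in population])
    # Population-level variability: spread of individual-mean representations.
    between = mean_pairwise_dist(means)
    # Internal variability: average spread within each individual's responses.
    within = np.mean([mean_pairwise_dist(ind) for ind in population])
    return between / within

# Toy usage: a "diverse" population (individuals centered far apart)
# should score higher than a homogeneous one.
rng = np.random.default_rng(0)
diverse = [rng.normal(loc=c, scale=0.1, size=(5, 3)) for c in (0.0, 5.0, 10.0)]
uniform = [rng.normal(loc=0.0, scale=0.1, size=(5, 3)) for _ in range(3)]
print(diversity_ratio(diverse) > diversity_ratio(uniform))
```

Under this sketch, a ratio well above that of a homogeneous baseline indicates that simulated individuals differ from one another more than they vary internally, which is the population-level signal the paper's comparison of aligned and non-aligned models relies on.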
Problem

Research questions and friction points this paper is trying to address.

Do synthetically-generated LLM "populations" capture human-like conceptual diversity?
Does post-training alignment (RLHF or RLAIF) reduce a model's conceptual diversity relative to its non-aligned counterpart?
How can conceptual diversity be quantified at the population level for LLMs?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a population-level metric relating the internal variability of simulated individuals to population-level variability
Evaluates aligned and non-aligned LLMs against rich human behavioral data in two domains
Identifies a trade-off between increasing value alignment and decreasing conceptual diversity