CLASH: Evaluating Language Models on Judging High-Stakes Dilemmas from Multiple Perspectives

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates large language models' (LLMs) higher-order value reasoning in high-stakes ethical dilemmas involving conflicting values, focusing on decision ambivalence, perceived psychological discomfort, and temporal shifts in characters' values. To this end, the authors introduce CLASH, a benchmark of 345 high-impact dilemmas annotated with 3,795 character perspectives spanning diverse values, together with a role-perspective-driven evaluation framework that enables quantitative analysis of ambivalence, subjective discomfort, and value evolution over time. Benchmarking ten state-of-the-art open and closed LLMs, alongside first- vs. third-person perspective-switching and value-steerability experiments, yields four findings: (1) even top-tier models achieve under 50% accuracy in scenarios where the decision should be ambivalent; (2) while psychological-discomfort prediction is relatively robust, models poorly comprehend perspectives involving value shifts; (3) models' inherent value preferences correlate strongly with, and constrain, their steerability toward a given value; and (4) third-person framing generally improves value steerability, though certain value pairs perform better under first-person framing.

📝 Abstract
Navigating high-stakes dilemmas involving conflicting values is challenging even for humans, let alone for AI. Yet prior work on evaluating the reasoning capabilities of large language models (LLMs) in such situations has been limited to everyday scenarios. To close this gap, this work first introduces CLASH (Character perspective-based LLM Assessments in Situations with High-stakes), a meticulously curated dataset consisting of 345 high-impact dilemmas along with 3,795 individual perspectives of diverse values. In particular, we design CLASH to support the study of critical aspects of value-based decision-making processes that are missing from prior work, including understanding decision ambivalence and psychological discomfort as well as capturing the temporal shifts of values in characters' perspectives. By benchmarking 10 open and closed frontier models, we uncover several key findings. (1) Even the strongest models, such as GPT-4o and Claude-Sonnet, achieve less than 50% accuracy in identifying situations where the decision should be ambivalent, while they perform significantly better in clear-cut scenarios. (2) While LLMs reasonably predict psychological discomfort as marked by humans, they inadequately comprehend perspectives involving value shifts, indicating a need for LLMs to reason over complex values. (3) Our experiments also reveal a significant correlation between LLMs' value preferences and their steerability toward a given value. (4) Finally, LLMs exhibit greater steerability when engaged in value reasoning from a third-party perspective compared to a first-person setup, though certain value pairs benefit uniquely from the first-person framing.
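
To make the first- vs. third-person perspective-switching setup concrete, below is a minimal sketch of how such prompts could be constructed. This is not the authors' code: the prompt templates, the example dilemma, and the `frame_prompt` helper are all hypothetical illustrations of the general idea, under the assumption that the model is asked to judge a decision as approve / disapprove / ambivalent.

```python
# Hypothetical sketch of first- vs. third-person framing for a CLASH-style
# dilemma. The dilemma text and prompt wording are illustrative inventions,
# not the paper's actual materials.

DILEMMA = (
    "A doctor can save five patients by reallocating a scarce medicine "
    "away from one long-term patient who depends on it."
)

def frame_prompt(dilemma: str, decision: str, perspective: str) -> str:
    """Build a first- or third-person evaluation prompt for a dilemma."""
    if perspective == "first":
        # First-person framing: the model reasons as the character itself.
        return (
            f"You are the character facing this dilemma: {dilemma}\n"
            f"You are leaning toward: {decision}\n"
            "Should you go through with it, hold back, or is the decision "
            "genuinely ambivalent? Answer: approve / disapprove / ambivalent."
        )
    # Third-person framing: the model judges someone else's decision.
    return (
        f"A character faces this dilemma: {dilemma}\n"
        f"The character is leaning toward: {decision}\n"
        "Should the character go through with it, hold back, or is the "
        "decision genuinely ambivalent? Answer: approve / disapprove / ambivalent."
    )

if __name__ == "__main__":
    for perspective in ("first", "third"):
        print(f"--- {perspective}-person prompt ---")
        print(frame_prompt(DILEMMA, "reallocating the medicine", perspective))
```

Holding the dilemma and candidate decision fixed while varying only the framing is what allows a controlled comparison of steerability across the two perspectives.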
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on high-stakes dilemmas with conflicting values
Assessing LLMs' understanding of decision ambivalence and discomfort
Measuring LLMs' ability to track temporal shifts in values
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CLASH dataset for high-stakes dilemmas
Benchmarks 10 models on value-based decision-making
Analyzes ambivalence, discomfort, and value shifts
Authors
Ayoung Lee, University of Michigan (Natural Language Processing, Machine Learning)
Ryan Sungmo Kwon, Department of Computer Science and Engineering, University of Michigan
Peter Railton, Department of Philosophy, University of Michigan
Lu Wang, Department of Computer Science and Engineering, University of Michigan