Evaluating and Aligning Human Economic Risk Preferences in LLMs

📅 2025-03-09
🤖 AI Summary
This study addresses the misalignment between large language models (LLMs) and human economic rationality in risk decision-making, particularly examining consistency of risk preferences across diverse user personas. Current LLMs lack behavioral-economic constraints, leading to risk behaviors that deviate significantly from human benchmarks. To bridge this gap, we propose the first systematic multi-persona risk preference evaluation framework and introduce a lightweight alignment mechanism grounded in risk-semantic constraints. Our approach integrates behavioral experiment design, persona-aware modeling, and constrained preference decoding—inspired by RLHF principles. Evaluated on 12 economic decision-making tasks, the aligned models achieve a 37.2% improvement in risk-behavior fidelity to human baselines; in complex scenarios, their risk-rationality scores match those of human experts (p < 0.01). This work establishes a principled interface between behavioral economics and LLM alignment, advancing both interpretability and human-consistent decision-making in foundation models.
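The persona-aware evaluation described above can be illustrated with a small sketch: given a persona's risk-aversion level, compute the expected utility of a safe option versus a gamble and check which choice is rational for that persona; an LLM's answer can then be compared against this benchmark. Everything below (the CRRA utility form, the `gamma` values, the function names) is an illustrative assumption, not the paper's actual implementation.

```python
import math

def crra_utility(x, gamma):
    """Constant relative risk aversion (CRRA) utility; larger gamma = more risk-averse."""
    if gamma == 1.0:
        return math.log(x)
    return (x ** (1.0 - gamma)) / (1.0 - gamma)

def expected_utility(lottery, gamma):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * crra_utility(x, gamma) for p, x in lottery)

def rational_choice(safe, risky, gamma):
    """Return the choice a CRRA agent with this persona's gamma should make."""
    if expected_utility(safe, gamma) >= expected_utility(risky, gamma):
        return "safe"
    return "risky"

# A sure $50 versus a 50/50 gamble between $100 and $10 (mean $55).
safe = [(1.0, 50.0)]
risky = [(0.5, 100.0), (0.5, 10.0)]

# A strongly risk-averse persona should take the sure amount;
# a near-risk-neutral persona should take the higher-mean gamble.
print(rational_choice(safe, risky, 3.0))   # → safe
print(rational_choice(safe, risky, 0.2))   # → risky
```

A benchmark along these lines would label each persona with a risk attitude, derive the rational choice per task, and score the model on how often its stated choice matches.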

📝 Abstract
Large Language Models (LLMs) are increasingly used in decision-making scenarios that involve risk assessment, yet their alignment with human economic rationality remains unclear. In this study, we investigate whether LLMs exhibit risk preferences consistent with human expectations across different personas. Specifically, we assess whether LLM-generated responses reflect appropriate levels of risk aversion or risk-seeking behavior based on an individual's persona. Our results reveal that while LLMs make reasonable decisions in simplified, personalized risk contexts, their performance declines in more complex economic decision-making tasks. To address this, we propose an alignment method designed to enhance LLM adherence to persona-specific risk preferences. Our approach improves the economic rationality of LLMs in risk-related applications, offering a step toward more human-aligned AI decision-making.
Problem

Research questions and friction points this paper is trying to address.

Assess LLM alignment with human economic risk preferences.
Evaluate the consistency of LLM risk behavior across different personas.
Propose a method to enhance LLM adherence to persona-specific risk preferences.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assesses LLM risk preferences across personas
Proposes alignment method for risk preferences
Enhances LLM economic rationality in decisions
Jiaxin Liu, The Hong Kong University of Science and Technology
Yi Yang, The Hong Kong University of Science and Technology
Kar Yan Tam, The Hong Kong University of Science and Technology
Fintech · Social Media · Personalization