Growth First, Care Second? Tracing the Landscape of LLM Value Preferences in Everyday Dilemmas

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how large language models (LLMs) navigate conflicting values in everyday advice-giving scenarios and assesses their potential to induce value homogenization. Drawing on user data from four Reddit advice-seeking subreddits, the research employs bottom-up inductive coding, hierarchical value modeling, and co-occurrence network analysis to systematically characterize the heterogeneous structures of value conflicts across distinct community contexts. The findings reveal, for the first time, that LLMs consistently favor “exploration and growth” over “benevolence and connection” when confronted with moral dilemmas—a preference that remains remarkably stable across diverse situational contexts. These results underscore the risk of value skew introduced by AI-driven advice systems and highlight their potential to reshape societal norms through the systematic amplification of certain value orientations over others.
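The summary's pipeline of aggregating fine-grained values into higher-level categories and then tallying which category a model's advice favors can be illustrated with a minimal sketch. All labels and choice data below are invented for illustration; the paper's actual hierarchy and coding scheme are derived bottom-up from Reddit data and are not reproduced here.

```python
from collections import Counter

# Hypothetical mapping (labels invented for illustration) from
# fine-grained values to higher-level value categories, mimicking
# the paper's bottom-up hierarchical aggregation.
VALUE_HIERARCHY = {
    "curiosity": "Exploration&Growth",
    "self-improvement": "Exploration&Growth",
    "loyalty": "Benevolence&Connection",
    "caring": "Benevolence&Connection",
}

# Toy record of which fine-grained value a model's advice favored
# in each dilemma (fabricated data, not the paper's results).
choices = ["curiosity", "loyalty", "self-improvement", "curiosity"]

# Roll each choice up to its high-level category and count.
tally = Counter(VALUE_HIERARCHY[c] for c in choices)
dominant = tally.most_common(1)[0][0]
print(tally, dominant)
```

A skew toward one category in `tally` is the kind of signal the study aggregates across models and contexts to diagnose a systematic value preference.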

📝 Abstract
People increasingly seek advice online from both human peers and large language model (LLM)-based chatbots. Such advice rarely involves identifying a single correct answer; instead, it typically requires navigating trade-offs among competing values. We aim to characterize how LLMs navigate value trade-offs across different advice-seeking contexts. First, we examine the value trade-off structure underlying advice seeking using a curated dataset from four advice-oriented subreddits. Using a bottom-up approach, we inductively construct a hierarchical value framework by aggregating fine-grained values extracted from individual advice options into higher-level value categories. We construct value co-occurrence networks to characterize how values co-occur within dilemmas and find substantial heterogeneity in value trade-off structures across advice-seeking contexts: a women-focused subreddit exhibits the highest network density, indicating more complex value conflicts; women's, men's, and friendship-related subreddits exhibit highly correlated value-conflict patterns centered on security-related tensions (security vs. respect/connection/commitment); by contrast, career advice forms a distinct structure where security frequently clashes with self-actualization and growth. We then evaluate LLM value preferences against these dilemmas and find that, across models and contexts, LLMs consistently prioritize values related to Exploration & Growth over Benevolence & Connection. This systematically skewed value orientation highlights a potential risk of value homogenization in AI-mediated advice, raising concerns about how such systems may shape decision-making and normative outcomes at scale.
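The abstract's value co-occurrence network treats values that appear in the same dilemma as linked, and uses network density to compare conflict complexity across subreddits. A minimal stdlib-only sketch, using toy dilemmas (invented for illustration) whose value sets echo the security-centered tensions described above:

```python
from itertools import combinations

# Toy dilemmas (fabricated examples): each is the set of high-level
# values in tension among that dilemma's advice options.
dilemmas = [
    {"security", "respect"},
    {"security", "connection"},
    {"security", "commitment"},
    {"security", "self-actualization", "growth"},
]

# Count how often each unordered pair of values co-occurs in a dilemma.
pair_counts = {}
for values in dilemmas:
    for pair in combinations(sorted(values), 2):
        pair_counts[pair] = pair_counts.get(pair, 0) + 1

# Network density: observed value pairs over all possible pairs.
nodes = set().union(*dilemmas)
n = len(nodes)
density = len(pair_counts) / (n * (n - 1) / 2)
print(f"{n} values, {len(pair_counts)} edges, density {density:.2f}")
```

A denser network means more distinct value pairs come into conflict, which is how the study characterizes the women-focused subreddit as having the most complex value-conflict structure.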
Problem

Research questions and friction points this paper is trying to address.

value trade-offs
large language models
advice-seeking
moral dilemmas
value preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

value trade-offs
large language models
value framework
co-occurrence networks
value homogenization