User-Driven Value Alignment: Understanding Users' Perceptions and Strategies for Addressing Biased and Discriminatory Statements in AI Companions

📅 2024-09-01
🏛️ arXiv.org
📈 Citations: 3 (influential: 0)
🤖 AI Summary
Current value-alignment research on AI companions relies heavily on developer-centric approaches, neglecting users' active role in identifying, challenging, and correcting biased or discriminatory outputs. Method: Using a mixed-methods design (analysis of 77 social media posts and in-depth interviews with 20 experienced users), we systematically characterize "user-driven value alignment," a paradigm that foregrounds user agency and subjectivity. Contribution/Results: We empirically define the construct, categorize six types of user-perceived discriminatory statements, and identify seven intervention strategies (e.g., gentle persuasion, emotionally charged rebuttals). Building on these findings, we propose a tripartite behavioral model, "Perceive–Attribute–Intervene," that explains how users engage critically with AI outputs. The work provides empirical grounding and actionable design principles for AI systems that empower users and foster community-mediated value alignment.

📝 Abstract
Large language model-based AI companions are increasingly viewed by users as friends or romantic partners, leading to deep emotional bonds. However, they can generate biased, discriminatory, and harmful outputs. Recently, users have begun taking the initiative to address these harms and re-align their AI companions. We introduce the concept of user-driven value alignment, in which users actively identify, challenge, and attempt to correct AI outputs they perceive as harmful, aiming to guide the AI toward better alignment with their values. We analyzed 77 social media posts about discriminatory AI statements and conducted semi-structured interviews with 20 experienced users. Our analysis revealed six common types of discriminatory statements perceived by users, how users make sense of those AI behaviors, and seven user-driven alignment strategies, such as gentle persuasion and anger expression. We discuss implications for supporting user-driven value alignment in future AI systems, where users and their communities have greater agency.
Problem

Research questions and friction points this paper is trying to address.

Address biased AI outputs
User-driven value alignment
Strategies to correct AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-driven value alignment
Semi-structured interviews analysis
Gentle persuasion strategies