Not Like Us, Hunty: Measuring Perceptions and Behavioral Effects of Minoritized Anthropomorphic Cues in LLMs

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether deploying minority language variants—such as African American English (AAE) and queer slang—in large language models (LLMs) enhances user reliance and perceived experience (e.g., trust, satisfaction, frustration, social presence). Method: A dual-cohort, double-blind controlled experiment (N=985) employed multidimensional subjective scales and behavioral analytics to compare responses to LLM agents using AAE, queer slang, or Standard American English (SAE). Contribution/Results: Contrary to the “closer-is-better” assumption, findings reveal a non-monotonic, group-heterogeneous relationship between linguistic adaptation and trust: AAE users reported higher trust in and greater reliance on SAE agents; queer slang users experienced elevated social presence yet still preferred SAE agents. These results challenge superficial linguistic appropriation as a trust-building strategy, demonstrating that such practices may erode credibility. The study underscores the necessity of embedding critical awareness of power structures and cultural context into LLM design and deployment.
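The paper does not publish its analysis pipeline; as a rough illustration of the kind of behavioral reliance metric the summary describes (the fraction of trials in which a participant's final answer matched the LLM agent's suggestion, compared across language conditions), here is a minimal sketch in Python. All field names, condition labels, and the aggregation scheme are assumptions for illustration, not the authors' actual code:

```python
from collections import defaultdict

def reliance_by_condition(trials):
    """Per-condition reliance: fraction of trials in which a participant's
    final answer matched the agent's suggestion.

    `trials` is a list of dicts with (assumed) keys:
      'condition'  -- agent language variety, e.g. 'SAE' or 'AAE'
      'suggestion' -- the agent's suggested answer
      'final'      -- the participant's final answer
    """
    matched = defaultdict(int)   # trials where final answer == suggestion
    total = defaultdict(int)     # all trials in the condition
    for t in trials:
        total[t['condition']] += 1
        if t['final'] == t['suggestion']:
            matched[t['condition']] += 1
    return {c: matched[c] / total[c] for c in total}

# Toy example with fabricated data, for illustration only:
trials = [
    {'condition': 'SAE', 'suggestion': 'A', 'final': 'A'},
    {'condition': 'SAE', 'suggestion': 'B', 'final': 'B'},
    {'condition': 'AAE', 'suggestion': 'A', 'final': 'C'},
    {'condition': 'AAE', 'suggestion': 'B', 'final': 'B'},
]
print(reliance_by_condition(trials))  # {'SAE': 1.0, 'AAE': 0.5}
```

A higher value for the SAE condition than for the sociolect condition, within the same speaker cohort, is the shape of the headline result reported above.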

📝 Abstract
As large language models (LLMs) increasingly adapt and personalize to diverse sets of users, there is an increased risk of systems appropriating sociolects, i.e., language styles or dialects that are associated with specific minoritized lived experiences (e.g., African American English, Queer slang). In this work, we examine whether sociolect usage by an LLM agent affects user reliance on its outputs and user perception (satisfaction, frustration, trust, and social presence). We designed and conducted user studies where 498 African American English (AAE) speakers and 487 Queer slang speakers performed a set of question-answering tasks with LLM-based suggestions in either standard American English (SAE) or their self-identified sociolect. Our findings showed that sociolect usage by LLMs influenced both reliance and perceptions, though in some surprising ways. Results suggest that both AAE and Queer slang speakers relied more on the SAE agent, and had more positive perceptions of the SAE agent. Yet, only Queer slang speakers felt more social presence from the Queer slang agent over the SAE one, whereas only AAE speakers preferred and trusted the SAE agent over the AAE one. These findings emphasize the need to test for behavioral outcomes rather than simply assume that personalization would lead to a better and safer reliance outcome. They also highlight the nuanced dynamics of minoritized language in machine interactions, underscoring the need for LLMs to be carefully designed to respect cultural and linguistic boundaries while fostering genuine user engagement and trust.
Problem

Research questions and friction points this paper is trying to address.

Does sociolect usage (AAE, Queer slang) by an LLM agent affect user reliance on its outputs?
How does sociolect usage shape user perception: trust, satisfaction, frustration, and social presence?
What cultural and linguistic boundaries should LLMs respect in interactions with minoritized-language speakers?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-cohort, double-blind controlled experiment (N=985) pairing behavioral reliance measures with subjective perception scales
Recruited 498 AAE speakers and 487 Queer slang speakers for question-answering tasks with LLM suggestions in SAE or their self-identified sociolect
Revealed non-monotonic, group-heterogeneous effects of linguistic adaptation, challenging the "closer-is-better" assumption in personalization