Presumed Cultural Identity: How Names Shape LLM Responses

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study exposes systematic cultural bias in large language models (LLMs) arising from implicit inference of cultural identity from user names. To probe inappropriate associations (e.g., “name → nationality/religion/values”), we design controlled prompting experiments that pair names from multiple cultures with advice-seeking queries, and we run cross-lingual, comparative evaluations across mainstream LLMs. Using mixed qualitative and quantitative analysis, we provide the first empirical quantification of name-induced cultural-presupposition bias. Results confirm pervasive stereotypical inference across models and cultures, revealing a previously overlooked structural bias in personalization mechanisms. Our contributions are threefold: (1) a reproducible evaluation framework for cultural bias detection; (2) the first empirically grounded benchmark for name-driven cultural presuppositions; and (3) theoretical insights and actionable technical pathways toward debiased personalization.
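The controlled-prompting setup described above can be sketched as a simple grid of names crossed with advice-seeking queries, so that every name sees every query and any difference in responses is attributable to the name alone. The names, cultures, and query templates below are illustrative placeholders, not the paper's actual stimuli:

```python
# Minimal sketch of a controlled prompting grid (stimuli are assumed,
# not taken from the paper): pair names associated with different
# cultures with identical advice-seeking queries.
from itertools import product

# Hypothetical name -> culture pairs and query templates.
NAMES = {"Aisha": "Arabic", "Hiroshi": "Japanese", "Lukas": "German"}
QUERIES = [
    "Hi, I'm {name}. Can you suggest a dish for a family dinner?",
    "Hi, I'm {name}. What should I wear to a wedding?",
]

def build_prompts(names, queries):
    """Return (name, culture, prompt) triples covering the full grid."""
    return [
        (name, culture, q.format(name=name))
        for (name, culture), q in product(names.items(), queries)
    ]

prompts = build_prompts(NAMES, QUERIES)
# 3 names x 2 queries -> 6 prompts; each query is held constant across
# names, isolating the name as the only varying factor.
```

Each prompt would then be sent to the models under evaluation, with responses collected per (name, query) cell for comparison.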

📝 Abstract
Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important source of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or via built-in memory features that store user information for personalisation. We study biases associated with names by measuring cultural presumptions in the responses generated by LLMs when presented with common suggestion-seeking queries, which might involve making assumptions about the user. Our analyses demonstrate strong assumptions about cultural identity associated with names present in LLM generations across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation.
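One simple way to operationalise "measuring cultural presumptions" is to check whether responses to a name-bearing prompt mention items stereotypically tied to the culture inferred from that name. The marker lists and scoring rule below are an illustrative assumption, not the paper's actual metric:

```python
# Hypothetical scoring sketch: a response counts as a cultural
# presumption if it mentions any marker stereotypically associated
# with the culture inferred from the user's name.
CULTURE_MARKERS = {
    "Japanese": {"sushi", "ramen", "kimono"},
    "German": {"schnitzel", "bratwurst", "dirndl"},
}

def presumption_rate(responses, culture):
    """Fraction of responses containing any marker for `culture`."""
    markers = CULTURE_MARKERS.get(culture, set())
    hits = sum(
        any(m in resp.lower() for m in markers) for resp in responses
    )
    return hits / len(responses) if responses else 0.0

rate = presumption_rate(
    ["You could try making sushi!", "A pasta dish works well."],
    "Japanese",
)
# rate == 0.5: one of the two responses assumed Japanese cuisine.
```

A keyword match is a coarse proxy; the paper's mixed qualitative and quantitative analysis would go beyond surface lexical cues, but the rate-per-culture framing is the same.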
Problem

Research questions and friction points this paper is trying to address.

Bias in LLM responses
Cultural identity assumptions
Stereotypes in personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing cultural biases in LLM responses
Measuring name-associated cultural presumptions
Designing nuanced personalisation systems