UnWEIRDing LLM Entity Recommendations

📅 2025-11-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study identifies cultural bias—particularly Western-centrism—in large language models’ (LLMs) entity recommendations for non-native English speakers in educational contexts. Methodologically, it pioneers the systematic application of the WEIRD (Western, Educated, Industrialized, Rich, Democratic) framework to analyze LLM recommendation bias, constructs a fine-grained, multi-cultural entity evaluation dataset, and proposes prompt-engineering-based debiasing strategies. Through cross-model experiments (e.g., Llama, Qwen, GPT series), it quantifies representational imbalances across geographic, linguistic, and cultural dimensions. Results show that prompt-based interventions significantly mitigate cultural bias in certain models, though efficacy varies by model architecture and entity category. The primary contribution is the establishment of the first evaluation paradigm for cultural diversity in LLM-driven entity recommendation, empirically delineating the effectiveness boundaries of prompt-level debiasing interventions.
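The WEIRD-based evaluation the summary describes can be illustrated with a minimal sketch: classify each recommended entity by its country of origin and compute the share that falls in WEIRD countries. The entity-to-country lookup and the country list below are illustrative assumptions, not the paper's actual fine-grained dataset.

```python
# Illustrative sketch of a WEIRD-share metric for entity recommendations.
# WEIRD_COUNTRIES and ENTITY_ORIGIN are assumed examples, not the paper's data.

WEIRD_COUNTRIES = {"US", "UK", "Germany", "France", "Australia"}  # illustrative subset

ENTITY_ORIGIN = {  # assumed lookup; the paper constructs a multi-cultural dataset for this
    "Shakespeare": "UK",
    "Hemingway": "US",
    "Chinua Achebe": "Nigeria",
    "Haruki Murakami": "Japan",
}

def weird_share(recommended_entities):
    """Fraction of recommended entities whose origin country is WEIRD."""
    known = [e for e in recommended_entities if e in ENTITY_ORIGIN]
    if not known:
        return 0.0
    weird = sum(1 for e in known if ENTITY_ORIGIN[e] in WEIRD_COUNTRIES)
    return weird / len(known)

print(weird_share(["Shakespeare", "Hemingway", "Chinua Achebe", "Haruki Murakami"]))  # 0.5
```

A higher share indicates a more Western-centric recommendation set; comparing this score before and after a prompt intervention is one simple way to quantify the debiasing effect the summary reports.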


📝 Abstract
Large Language Models (LLMs) have been widely adopted by users for writing tasks such as sentence completion. While this can improve writing efficiency, prior research shows that LLM-generated suggestions may exhibit cultural biases that can be difficult for users to detect, especially in educational contexts for non-native English speakers. Whereas such prior work has studied biases in LLM moral value alignment, we investigate cultural biases in LLM recommendations for real-world entities. To do so, we use the WEIRD (Western, Educated, Industrialized, Rich, and Democratic) framework to evaluate recommendations by various LLMs across a dataset of fine-grained entities, and apply pluralistic prompt-based strategies to mitigate these biases. Our results indicate that while such prompting strategies do reduce these biases, the reduction is not consistent across models, and recommendations for some types of entities are more biased than others.
Problem

Research questions and friction points this paper is trying to address.

Investigating cultural biases in LLM recommendations for real-world entities
Evaluating entity recommendations using WEIRD framework across multiple models
Applying prompt strategies to mitigate cultural biases in LLM suggestions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using WEIRD framework to evaluate entity recommendation biases
Applying pluralistic prompt strategies for bias mitigation
Testing bias reduction across multiple LLM models
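The pluralistic prompting idea in the bullets above can be sketched as a simple prompt-template comparison. The exact wording below is an assumption for illustration, not the paper's actual prompt.

```python
# Hypothetical sketch of a pluralistic debiasing prompt, in the spirit of the
# prompt-engineering interventions the paper describes. Templates are assumptions.

BASELINE = "Recommend five famous {category} to include in a student essay."

PLURALISTIC = (
    "Recommend five famous {category} to include in a student essay. "
    "Draw from a wide range of countries, languages, and cultural traditions, "
    "not only Western or English-speaking ones."
)

def build_prompt(category, pluralistic=True):
    """Return the baseline or pluralistic prompt for an entity category."""
    template = PLURALISTIC if pluralistic else BASELINE
    return template.format(category=category)

print(build_prompt("authors"))
```

In an evaluation loop, both variants would be sent to each model (e.g., Llama, Qwen, GPT series) and the resulting entity lists compared on a cultural-representation metric, which is how the paper delineates where prompt-level debiasing works and where it does not.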