Culturally-Aware Conversations: A Framework & Benchmark for LLMs

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM cultural-adaptation benchmarks lack ecological validity: they fail to reflect authentic cross-cultural interaction scenarios. Method: The authors propose CulturaBench, an evaluation framework grounded in sociocultural theory and centered on how linguistic style evolves dynamically across situational, relational, and cultural contexts. It introduces three core dimensions for cross-cultural NLP assessment: conversational framing, stylistic sensitivity, and subjective correctness. A culturally diverse annotator cohort co-constructed a high-ecological-validity benchmark dataset featuring multi-dimensional human annotations and contextualized dialogue test cases. Contribution/Results: Empirical evaluation reveals systematic deficiencies in mainstream LLMs in both dynamic stylistic adaptation and comprehension of implicit cultural norms. CulturaBench establishes a theory-driven, reproducible evaluation paradigm and benchmark resource for culturally aware dialogue systems.
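The summary describes test cases that pair a dialogue with situational, relational, and cultural context, plus multi-dimensional ratings from a diverse annotator pool. A minimal sketch of what such a record might look like (the field names, dimension labels, and rating scale below are hypothetical; the paper's actual schema is not specified here):

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One rater's judgment on a single evaluation dimension (hypothetical schema)."""
    rater_culture: str   # self-reported cultural background of the annotator
    dimension: str       # "conversational_framing" | "stylistic_sensitivity" | "subjective_correctness"
    score: int           # e.g. a 1-5 Likert rating (assumed scale)

@dataclass
class DialogueTestCase:
    """A contextualized dialogue test case with multi-dimensional annotations."""
    situation: str       # situational context, e.g. "declining a dinner invitation"
    relationship: str    # relational context, e.g. "junior to senior colleague"
    culture: str         # cultural context the response should adapt to
    dialogue: list[str]  # alternating user/model turns
    annotations: list[Annotation] = field(default_factory=list)

# Illustrative instance (invented content, not from the benchmark)
case = DialogueTestCase(
    situation="declining a dinner invitation",
    relationship="junior colleague to senior colleague",
    culture="Korean workplace norms",
    dialogue=[
        "Would you join us for dinner tonight?",
        "Thank you so much for the invitation...",
    ],
)
case.annotations.append(Annotation("ko", "stylistic_sensitivity", 4))
```

The point of the structure is that the same model reply can be rated differently depending on the situational and relational frame, which is what distinguishes this setup from context-free cultural QA benchmarks.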

📝 Abstract
Existing benchmarks that measure cultural adaptation in LLMs are misaligned with the actual challenges these models face when interacting with users from diverse cultural backgrounds. In this work, we introduce the first framework and benchmark designed to evaluate LLMs in realistic, multicultural conversational settings. Grounded in sociocultural theory, our framework formalizes how linguistic style - a key element of cultural communication - is shaped by situational, relational, and cultural context. We construct a benchmark dataset based on this framework, annotated by culturally diverse raters, and propose a new set of desiderata for cross-cultural evaluation in NLP: conversational framing, stylistic sensitivity, and subjective correctness. We evaluate today's top LLMs on our benchmark and show that these models struggle with cultural adaptation in a conversational setting.
Problem

Research questions and friction points this paper is trying to address.

Existing cultural benchmarks are misaligned with the challenges LLMs face in real interactions
No prior framework evaluates LLMs in multicultural conversational settings
Current LLMs struggle with cultural adaptation during conversations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework evaluates LLMs in multicultural conversational settings
Benchmark dataset annotated by culturally diverse raters
Proposes desiderata for cross-cultural evaluation in NLP
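Evaluation along the proposed desiderata presumably aggregates ratings from culturally diverse annotators per dimension. A minimal sketch of one plausible aggregation step (the dimension keys and flat-dict rating format are assumptions for illustration, not the paper's published protocol):

```python
from collections import defaultdict
from statistics import mean

def score_by_dimension(annotations: list[dict]) -> dict[str, float]:
    """Average rater scores per evaluation dimension (illustrative aggregation)."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for a in annotations:
        buckets[a["dimension"]].append(a["score"])
    return {dim: mean(scores) for dim, scores in buckets.items()}

# Invented ratings for one model response
ratings = [
    {"dimension": "conversational_framing", "score": 4},
    {"dimension": "conversational_framing", "score": 2},
    {"dimension": "stylistic_sensitivity", "score": 3},
]
print(score_by_dimension(ratings))
# {'conversational_framing': 3, 'stylistic_sensitivity': 3}
```

Because "subjective correctness" is explicitly subjective, a real protocol would likely report per-culture score distributions rather than a single mean; a simple average is shown here only to make the per-dimension structure concrete.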