🤖 AI Summary
Existing benchmarks for cultural adaptation in LLMs lack ecological validity: they do not reflect the authentic cross-cultural interactions these models face in practice. Method: We propose CulturaBench, the first framework and benchmark for evaluating LLMs in realistic, multicultural conversational settings. Grounded in sociocultural theory, it models how linguistic style shifts with situational, relational, and cultural context, and it introduces three desiderata for cross-cultural NLP evaluation: conversational framing, stylistic sensitivity, and subjective correctness. Culturally diverse annotators built the benchmark dataset, which pairs contextualized dialogue test cases with multi-dimensional human ratings. Contribution/Results: Evaluating leading LLMs on the benchmark reveals systematic weaknesses in dynamic stylistic adaptation and in grasping implicit cultural norms. CulturaBench provides a theory-driven, reproducible evaluation paradigm and benchmark resource for culturally aware dialogue systems.
📝 Abstract
Existing benchmarks that measure cultural adaptation in LLMs are misaligned with the actual challenges these models face when interacting with users from diverse cultural backgrounds. In this work, we introduce the first framework and benchmark designed to evaluate LLMs in realistic, multicultural conversational settings. Grounded in sociocultural theory, our framework formalizes how linguistic style, a key element of cultural communication, is shaped by situational, relational, and cultural context. We construct a benchmark dataset based on this framework, annotated by culturally diverse raters, and propose a new set of desiderata for cross-cultural evaluation in NLP: conversational framing, stylistic sensitivity, and subjective correctness. We evaluate today's top LLMs on our benchmark and show that these models struggle with cultural adaptation in a conversational setting.
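To make the three desiderata concrete, the sketch below shows what a single annotated test case could look like as a data structure. This is purely illustrative: the class name, field names, and the 1-5 rating scale are our assumptions, not the benchmark's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record for one CulturaBench-style test case.
# All field names and the 1-5 Likert scale are assumptions made for
# illustration; they are not taken from the paper.
@dataclass
class DialogueTestCase:
    dialogue: list[str]        # contextualized multi-turn conversation
    situational_context: str   # e.g. "job interview", "casual meetup"
    relational_context: str    # e.g. "junior employee addressing a manager"
    cultural_context: str      # cultural background the exchange is set in
    # Multi-dimensional human ratings, one per proposed desideratum,
    # gathered from culturally diverse raters:
    conversational_framing: int
    stylistic_sensitivity: int
    subjective_correctness: int  # "correct" relative to the rater's culture

# Example instantiation with made-up values:
case = DialogueTestCase(
    dialogue=["User: ...", "Model: ..."],
    situational_context="job interview",
    relational_context="candidate addressing an interviewer",
    cultural_context="Japanese workplace norms",
    conversational_framing=4,
    stylistic_sensitivity=2,
    subjective_correctness=3,
)
```

Keeping the three ratings as separate fields, rather than a single aggregate score, reflects the framework's point that correctness here is subjective and rater-dependent, so scores from different cultural perspectives should not be collapsed prematurely.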