Multicultural Spyfall: Assessing LLMs through Dynamic Multilingual Social Deduction Game

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing static benchmarks struggle to effectively evaluate the real-world social reasoning capabilities of large language models in multilingual and multicultural contexts, often suffering from data saturation and leakage. This work proposes the first dynamic evaluation framework based on the social deduction game Spyfall, which assesses models through multi-turn strategic dialogue tasks requiring them to identify hidden roles by leveraging culture-specific entities—such as local landmarks and foods—across diverse languages. Integrating multilingual and multicultural dimensions, the framework is designed to be scalable, robust against data leakage, and culturally sensitive. Experimental results reveal that models exhibit substantially weaker comprehension of local cultural entities in non-English settings, along with marked declines in rule adherence and strategic consistency. Notably, the evaluation outcomes align closely with rankings from Chatbot Arena.

📝 Abstract
The rapid advancement of Large Language Models (LLMs) has necessitated more robust evaluation methods that go beyond static benchmarks, which are increasingly prone to data saturation and leakage. In this paper, we propose a dynamic benchmarking framework for evaluating multilingual and multicultural capabilities through the social deduction game Spyfall. In our setup, models must engage in strategic dialogue to either identify a secret agent or avoid detection, utilizing culturally relevant locations or local foods. Our results show that the game-based rankings align closely with Chatbot Arena. However, we find a significant performance gap in non-English contexts: models are generally less proficient when handling locally specific entities and often struggle with rule-following and strategic integrity in non-English languages. We demonstrate that this game-based approach provides a scalable, leakage-resistant, and culturally nuanced alternative to traditional NLP benchmarks. The game history is available at https://huggingface.co/datasets/haryoaw/cultural-spyfall.
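The abstract describes rounds of strategic dialogue followed by a vote to unmask the spy. As a minimal illustrative sketch only (not the paper's actual harness; the function names, toy agents, and scoring rule here are all hypothetical stand-ins for real LLM calls), one round of a Spyfall-style evaluation loop could be structured like this:

```python
from collections import Counter

def play_spyfall_round(players, spy, location, describe, vote, turns=2):
    """Run one Spyfall-style round and report whether the spy was caught.

    players:  list of player names (one of them is the hidden spy)
    location: the culture-specific entity known to everyone except the spy
    describe(player, known_location, transcript) -> utterance
    vote(player, transcript) -> accused player name
    In a real harness, describe/vote would wrap LLM calls.
    """
    transcript = []
    for _ in range(turns):
        for p in players:
            known = None if p == spy else location  # the spy never sees the entity
            transcript.append((p, describe(p, known, transcript)))
    # Non-spy players vote on who the spy is; majority accusation wins.
    ballots = Counter(vote(p, transcript) for p in players if p != spy)
    accused, _ = ballots.most_common(1)[0]
    return accused == spy, transcript

# Toy deterministic agents standing in for LLMs:
def naive_describe(player, known_location, transcript):
    if known_location:
        return f"I am thinking of {known_location}"
    return "It is a nice place"  # the spy must bluff without the entity

def naive_vote(player, transcript):
    # Accuse the first speaker whose utterance never named the location.
    for speaker, utterance in transcript:
        if "nice place" in utterance:
            return speaker
    return transcript[0][0]

caught, log = play_spyfall_round(
    ["alice", "bob", "carol"], spy="bob", location="Borobudur",
    describe=naive_describe, vote=naive_vote)
```

Swapping in culture-specific entities per language (e.g. local landmarks or foods) and real model-backed `describe`/`vote` functions yields the kind of dynamic, regenerable benchmark the paper argues for.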
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
multilingual evaluation
cultural competence
social deduction game
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic benchmarking
multilingual LLM evaluation
social deduction game
cultural nuance
leakage-resistant evaluation