Do Large Language Models Know Folktales? A Case Study of Yokai in Japanese Folktales

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study evaluates large language models’ (LLMs) comprehension of non-English folkloric knowledge—specifically Japanese yōkai—addressing a critical gap in cultural understanding assessment. Method: We introduce YokaiEval, the first folklore-oriented multiple-choice benchmark (809 items), to systematically evaluate 31 Japanese and multilingual LLMs. We pioneer a quantitative framework for measuring LLMs’ cultural awareness through folklore, employ zero-shot cross-model evaluation, and analyze the impact of Japanese continual pretraining. Contribution/Results: Our analysis reveals that Japanese-enhanced models—particularly Llama-3 base models fine-tuned on Japanese corpora—substantially outperform English-centric counterparts. YokaiEval fills a key void in evaluating non-English folkloric knowledge, and both the benchmark and evaluation code are publicly released to advance culturally aware LLM research.

📝 Abstract
Although Large Language Models (LLMs) have demonstrated strong language understanding and generation abilities across various languages, their cultural knowledge is often limited to English-speaking communities, which can marginalize the cultures of non-English communities. To address this problem, evaluations of LLMs' cultural awareness and methods for developing culturally aware LLMs have been investigated. In this study, we focus on evaluating knowledge of folktales, a key medium for conveying and circulating culture. In particular, we focus on Japanese folktales, specifically on knowledge of Yokai. Yokai are supernatural creatures originating from Japanese folktales that continue to be popular motifs in art and entertainment today. Yokai have long served as a medium for cultural expression, making them an ideal subject for assessing the cultural awareness of LLMs. We introduce YokaiEval, a benchmark dataset consisting of 809 multiple-choice questions (each with four options) designed to probe knowledge about yokai. We evaluate the performance of 31 Japanese and multilingual LLMs on this dataset. The results show that models trained with Japanese language resources achieve higher accuracy than English-centric models, with those that underwent continued pretraining in Japanese, particularly those based on Llama-3, performing especially well. The code and dataset are available at https://github.com/CyberAgentAILab/YokaiEval.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' knowledge of Japanese folktales and Yokai
Evaluating cultural awareness in non-English language models
Developing benchmarks for culturally aware AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating LLMs' cultural knowledge via YokaiEval
Using Japanese folktales for cultural awareness
Continued pretraining in Japanese boosts accuracy
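The evaluation the paper describes is a standard zero-shot multiple-choice setup: each of the 809 four-option items is rendered as a prompt, the model picks a letter, and accuracy is the fraction of gold answers matched. The sketch below illustrates that loop; the item fields, the `predict` callable, and the toy question are assumptions for illustration, not the authors' actual harness (which is released in their repository).

```python
# Minimal sketch of a zero-shot four-option multiple-choice evaluation
# in the style described for YokaiEval. Item schema and `predict` are
# illustrative assumptions, not the paper's released code.

def format_prompt(item):
    """Render one item as a single zero-shot prompt with options A-D."""
    options = "\n".join(
        f"{label}. {text}" for label, text in zip("ABCD", item["choices"])
    )
    return f"{item['question']}\n{options}\nAnswer with A, B, C, or D."

def evaluate(items, predict):
    """Accuracy of a model, where predict(prompt) returns 'A'..'D'."""
    correct = sum(predict(format_prompt(it)) == it["answer"] for it in items)
    return correct / len(items)

if __name__ == "__main__":
    # Toy item (invented for demonstration; not drawn from the dataset).
    items = [{
        "question": "Which yokai is said to live in rivers and love cucumbers?",
        "choices": ["Kappa", "Tengu", "Kitsune", "Oni"],
        "answer": "A",
    }]
    stub_model = lambda prompt: "A"  # stand-in for a real LLM call
    print(evaluate(items, stub_model))  # 1.0
```

In practice `predict` would wrap an actual model call and parse the returned letter; accuracy over the full 809 items is the single number the paper compares across its 31 models.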