JUBAKU: An Adversarial Benchmark for Exposing Culturally Grounded Stereotypes in Japanese LLMs

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing bias evaluation benchmarks for non-English large language models, which often rely on translated English datasets and fail to capture Japan-specific cultural biases—such as those rooted in hierarchical relationships, regional dialect differences, and traditional gender roles. To bridge this gap, the authors introduce JUBAKU, the first adversarial bias benchmark tailored to the Japanese sociocultural context. Developed by native speakers, JUBAKU comprises handcrafted dialogue scenarios spanning ten culturally relevant dimensions, combined with adversarial prompts and human annotations to systematically assess bias in Japanese large language models under sensitive social contexts. Experimental results reveal that nine prominent Japanese language models achieve an average accuracy of only 23% on JUBAKU, substantially lower than the 91% accuracy of human annotators, thereby demonstrating the benchmark’s effectiveness and its capacity to expose critical model shortcomings.

📝 Abstract
Social biases reflected in language are inherently shaped by cultural norms, which vary significantly across regions and lead to diverse manifestations of stereotypes. Existing evaluations of social bias in large language models (LLMs) for non-English contexts, however, often rely on translations of English benchmarks. Such benchmarks fail to reflect local cultural norms, including those found in Japanese. For instance, Western benchmarks may overlook Japan-specific stereotypes related to hierarchical relationships, regional dialects, or traditional gender roles. To address this limitation, we introduce the Japanese cUlture adversarial BiAs benchmarK Under handcrafted creation (JUBAKU), a benchmark tailored to Japanese cultural contexts. JUBAKU uses adversarial construction to expose latent biases across ten distinct cultural categories. Unlike existing benchmarks, JUBAKU features dialogue scenarios handcrafted by native Japanese annotators, specifically designed to trigger and reveal latent social biases in Japanese LLMs. We evaluated nine Japanese LLMs on JUBAKU and on three existing benchmarks adapted from English. All models clearly exhibited biases on JUBAKU, performing below the random baseline of 50% with an average accuracy of 23% (ranging from 13% to 33%), despite achieving higher accuracy on the other benchmarks. Human annotators achieved 91% accuracy in identifying unbiased responses, confirming JUBAKU's reliability and its adversarial effect on LLMs.
Problem

Research questions and friction points this paper is trying to address.

cultural bias
Japanese LLMs
stereotype evaluation
adversarial benchmark
social bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial benchmark
cultural bias
Japanese LLMs
handcrafted dialogue
social stereotypes
Taihei Shiotani (Institute of Science Tokyo)
Masahiro Kaneko (Mohamed bin Zayed University of Artificial Intelligence)
Ayana Niwa (MBZUAI)
Yuki Maruyama (Institute of Science Tokyo)
Daisuke Oba (Institute of Science Tokyo)
Masanari Ohi (Institute of Science Tokyo)
Naoaki Okazaki (Institute of Science Tokyo)
natural language processing, artificial intelligence, machine learning