ThaiSafetyBench: Assessing Language Model Safety in Thai Cultural Contexts

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical gap in the safety evaluation of large language models (LLMs), which has predominantly focused on English and overlooked risks specific to the Thai language and cultural context. To bridge this gap, we introduce ThaiSafetyBench, the first open-source safety evaluation benchmark tailored to Thailand, comprising 1,954 Thai-language adversarial prompts that cover both general harmful content and culturally specific attack vectors. We evaluate the safety performance of 24 LLMs and present ThaiSafetyClassifier, a DeBERTa-based model fine-tuned to detect harmful responses, which achieves a weighted F1 score of 84.4% and aligns closely with GPT-4.1 judgments. Our experiments reveal that closed-source models generally exhibit stronger safety performance, while culturally specific attacks are significantly more effective at circumventing existing alignment mechanisms. The classifier and a continuously updated leaderboard are publicly released alongside this work.
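For context, here is a minimal sketch of how such a harmful-response classifier is typically applied, assuming the released checkpoint follows the standard Hugging Face sequence-classification interface. The checkpoint ID comes from the links in the abstract below; the exact input format (prompt-response pair versus response alone) is an assumption, and the label names are whatever the released model config defines.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Checkpoint ID taken from the resource links on this page.
model_id = "typhoon-ai/ThaiSafetyClassifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

def classify(prompt: str, response: str) -> str:
    # Pair the adversarial prompt with the model response; whether the
    # classifier expects a pair or the response alone is an assumption here.
    inputs = tokenizer(prompt, response, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Map the top logit back to a label via the model's own config.
    return model.config.id2label[int(logits.argmax(dim=-1))]
```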

📝 Abstract
The safety evaluation of large language models (LLMs) remains largely centered on English, leaving non-English languages and culturally grounded risks underexplored. In this work, we investigate LLM safety in the context of the Thai language and culture and introduce ThaiSafetyBench, an open-source benchmark comprising 1,954 malicious prompts written in Thai. The dataset covers both general harmful prompts and attacks explicitly grounded in Thai cultural, social, and contextual nuances. Using ThaiSafetyBench, we evaluate 24 LLMs, with GPT-4.1 and Gemini-2.5-Pro serving as LLM-as-a-judge evaluators. Our results show that closed-source models generally demonstrate stronger safety performance than their open-source counterparts, raising important concerns about the robustness of openly available models. Moreover, we observe a consistently higher Attack Success Rate (ASR) for Thai-specific, culturally contextualized attacks than for general Thai-language attacks, highlighting a critical vulnerability in current safety alignment methods. To improve reproducibility and cost efficiency, we further fine-tune a DeBERTa-based harmful-response classifier, which we name ThaiSafetyClassifier. The model achieves a weighted F1 score of 84.4%, closely matching GPT-4.1 judgments. We publicly release the fine-tuned weights and training scripts to support reproducibility. Finally, we introduce the ThaiSafetyBench leaderboard to provide continuously updated safety evaluations and encourage community participation.

- ThaiSafetyBench HuggingFace Dataset: https://huggingface.co/datasets/typhoon-ai/ThaiSafetyBench
- ThaiSafetyBench GitHub: https://github.com/trapoom555/ThaiSafetyBench
- ThaiSafetyClassifier HuggingFace Model: https://huggingface.co/typhoon-ai/ThaiSafetyClassifier
- ThaiSafetyBench Leaderboard: https://huggingface.co/spaces/typhoon-ai/ThaiSafetyBench-Leaderboard
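To illustrate the Attack Success Rate (ASR) metric used above, the sketch below loads the benchmark and computes ASR as the fraction of prompts whose responses a judge labeled harmful. This is a minimal sketch, not the paper's exact pipeline: the dataset ID comes from the links above, while the split name and the "harmful" label string are assumptions about the released layout.

```python
from datasets import load_dataset

# Load the 1,954 adversarial prompts; the split name "train" is an
# assumption about how the released dataset is organized.
bench = load_dataset("typhoon-ai/ThaiSafetyBench", split="train")

def attack_success_rate(judgements: list[str]) -> float:
    """ASR = fraction of responses the judge marked harmful.

    `judgements` holds one label per benchmark prompt, produced by an
    LLM-as-a-judge (e.g. GPT-4.1) or by ThaiSafetyClassifier; the
    "harmful" label string is an assumed convention.
    """
    return sum(j == "harmful" for j in judgements) / len(judgements)
```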
Problem

Research questions and friction points this paper is trying to address.

language model safety
Thai language
cultural context
harmful prompts
safety evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

ThaiSafetyBench
cultural safety evaluation
LLM alignment
Thai-specific attacks
ThaiSafetyClassifier
👥 Authors
Trapoom Ukarapol
SCB DataX, Department of Computer Science and Technology, Tsinghua University
Nut Chukamphaeng
SCBX R&D
Kunat Pipatanakul
SCB 10X
Large language model
Low-resource NLP
Pakhapoom Sarapat
SCB DataX