๐ค AI Summary
Current multilingual safety evaluation of large language models (LLMs) suffers from benchmark scarcity, narrow language coverage, and insufficient data diversity, hindering cross-lingual safety alignment research. To address this, we introduce ML-SafetyBenchโthe first open multilingual safety benchmark covering 12 languages (including 8 low-resource ones) and 45,000 annotated samples. Our methodology employs a fine-grained, multi-dimensional evaluation framework that integrates machine translation, transcreation, and natively authored content to ensure linguistic authenticity and cultural appropriateness. We propose a unified safety prompting template and classification-based metrics to assess direct/indirect safety responses and over-sensitivity. Empirical analysis reveals significant cross-lingual and cross-domain trade-offs between safety and helpfulness. All benchmark data, implementation code, and evaluation tools are publicly released.
๐ Abstract
The widespread adoption and increasing prominence of large language models (LLMs) in global technologies necessitate a rigorous focus on ensuring their safety across a diverse range of linguistic and cultural contexts. The lack of a comprehensive evaluation and diverse data in existing multilingual safety evaluations for LLMs limits their effectiveness, hindering the development of robust multilingual safety alignment. To address this critical gap, we introduce LinguaSafe, a comprehensive multilingual safety benchmark crafted with meticulous attention to linguistic authenticity. The LinguaSafe dataset comprises 45k entries in 12 languages, ranging from Hungarian to Malay. Curated using a combination of translated, transcreated, and natively-sourced data, our dataset addresses the critical need for multilingual safety evaluations of LLMs, filling the void in the safety evaluation of LLMs across diverse under-represented languages from Hungarian to Malay. LinguaSafe presents a multidimensional and fine-grained evaluation framework, with direct and indirect safety assessments, including further evaluations for oversensitivity. The results of safety and helpfulness evaluations vary significantly across different domains and different languages, even in languages with similar resource levels. Our benchmark provides a comprehensive suite of metrics for in-depth safety evaluation, underscoring the critical importance of thoroughly assessing multilingual safety in LLMs to achieve more balanced safety alignment. Our dataset and code are released to the public to facilitate further research in the field of multilingual LLM safety.