CultureGuard: Towards Culturally-Aware Dataset and Guard Model for Multilingual Safety Applications

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient cultural adaptation for non-English languages and the scarcity of high-quality annotated data in multilingual large language model (LLM) content safety, this paper proposes a four-stage culturally aligned synthetic data generation and filtering pipeline that extends an English safety dataset to eight non-English languages. Methodologically, it combines cultural data segregation, cultural data adaptation, machine translation, and quality filtering, followed by efficient LoRA-based fine-tuning to build a multilingual safety guard model. Key contributions include: (1) releasing a culture-aware multilingual safety dataset covering nine languages with 386K high-quality samples; and (2) training an 8B-parameter multilingual safety guard model that achieves state-of-the-art performance on multilingual content safety benchmarks, significantly narrowing the safety capability gap across languages.

📝 Abstract
The increasing use of Large Language Models (LLMs) in agentic applications highlights the need for robust safety guard models. While content safety in English is well-studied, non-English languages lack similar advancements due to the high cost of collecting culturally aligned labeled datasets. We present CultureGuard, a novel solution for curating culturally aligned, high-quality safety datasets across multiple languages. Our approach introduces a four-stage synthetic data generation and filtering pipeline: cultural data segregation, cultural data adaptation, machine translation, and quality filtering. This pipeline enables the conversion and expansion of the Nemotron-Content-Safety-Dataset-V2 English safety dataset into eight distinct languages: Arabic, German, Spanish, French, Hindi, Japanese, Thai, and Chinese. The resulting dataset, Nemotron-Content-Safety-Dataset-Multilingual-v1, comprises 386,661 samples in 9 languages and facilitates the training of Llama-3.1-Nemotron-Safety-Guard-Multilingual-8B-v1 via LoRA-based fine-tuning. The final model achieves state-of-the-art performance on several multilingual content safety benchmarks. We also benchmark the latest open LLMs on multilingual safety and observe that these LLMs are more prone to give unsafe responses when prompted in non-English languages. This work represents a significant step toward closing the safety gap in multilingual LLMs by enabling the development of culturally aware safety guard models.
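As context for the LoRA-based fine-tuning mentioned in the abstract, a back-of-the-envelope calculation shows why low-rank adapters make multilingual fine-tuning cheap: a frozen weight matrix W (d × k) is adapted by adding B·A, where B is d × r and A is r × k for a small rank r, so only r·(d + k) parameters are trained. The dimensions and rank below are illustrative, not the paper's actual configuration:

```python
# Illustrative LoRA parameter count (hypothetical dimensions, not the
# paper's actual setup): W (d x k) stays frozen; only A and B are trained.

def full_finetune_params(d, k):
    return d * k                # every entry of W is trainable

def lora_trainable_params(d, k, r):
    return r * (d + k)          # parameters in the low-rank factors A and B

# Example: one 4096 x 4096 projection matrix with LoRA rank 16.
d = k = 4096
r = 16
full = full_finetune_params(d, k)      # 16,777,216
lora = lora_trainable_params(d, k, r)  # 131,072
print(f"LoRA trains {lora / full:.2%} of the full matrix's parameters")
```

At rank 16, LoRA trains well under 1% of the parameters of each adapted matrix, which is what makes fine-tuning a separate multilingual guard adapter on top of an 8B base model tractable.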
Problem

Research questions and friction points this paper is trying to address.

Addressing the lack of multilingual safety datasets for LLMs
Developing culturally aware safety guard models for non-English languages
Improving content safety benchmarking across diverse languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four-stage synthetic data generation and filtering pipeline
LoRA-based fine-tuning for multilingual safety
Culturally aligned dataset expansion to 9 languages
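The four-stage pipeline can be sketched as a simple routing flow: each English sample is segregated into culture-specific or culture-agnostic content, adapted or translated accordingly, then quality-filtered. The helper functions below are hypothetical stubs standing in for the LLM-based steps the paper describes (the real system uses prompted LLMs and machine translation, not these placeholders):

```python
# Hypothetical sketch of the four-stage CultureGuard pipeline.
# Every stage here is a stub; the paper's actual stages are LLM-driven.

def segregate(sample):
    """Stage 1: label a sample as culture-specific or culture-agnostic.
    Stub heuristic: samples flagged as containing named entities are
    treated as culture-specific."""
    return "specific" if sample.get("has_named_entity") else "agnostic"

def adapt(sample, lang):
    """Stage 2: rewrite culture-specific content for the target culture (stub)."""
    return {**sample, "text": f"[adapted to {lang}] " + sample["text"]}

def translate(sample, lang):
    """Stage 3: machine-translate culture-agnostic content (stub)."""
    return {**sample, "text": f"[{lang}] " + sample["text"]}

def passes_quality(sample):
    """Stage 4: quality filtering (stub: drop empty texts)."""
    return bool(sample["text"].strip())

def build_multilingual_dataset(english_samples, languages):
    """Route each sample through adapt or translate, then filter."""
    out = []
    for lang in languages:
        for s in english_samples:
            s2 = adapt(s, lang) if segregate(s) == "specific" else translate(s, lang)
            if passes_quality(s2):
                out.append({**s2, "lang": lang})
    return out
```

The key design point this sketch illustrates is the branch in stage 1: culture-specific content is rewritten for the target culture rather than translated verbatim, which is what distinguishes the dataset from a plain machine-translated corpus.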
Raviraj Joshi
Indian Institute of Technology Madras
computer science · machine learning · natural language processing
Rakesh Paul
Senior Deep Learning Scientist, NVIDIA
Multilingual NLP · LLM · Model Optimisation · LLM Safety
Kanishk Singla
NVIDIA
Anusha Kamath
NVIDIA
Michael Evans
NVIDIA
Katherine Luna
NVIDIA
Shaona Ghosh
NVIDIA
Utkarsh Vaidya
NVIDIA
Eileen Long
NVIDIA
Sanjay Singh Chauhan
NVIDIA
Niranjan Wartikar
NVIDIA