SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use

📅 2025-05-22
🏛️ North American Chapter of the Association for Computational Linguistics
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing safety and ethics evaluations for large language models (LLMs) in enterprise multilingual, cross-cultural communication—e.g., email drafting and sales copy—lack systematic assessment of resistance to abusive instructions, cultural contextual adaptation, and ethical alignment. Method: We introduce the first enterprise-oriented, multilingual adversarial instruction safety benchmark, pioneering an explicit induction-based safety evaluation paradigm. It formalizes profanity-embedded adversarial prompts, integrates real-world communicative contexts and tonal variations, and jointly measures model refusal capability and cultural-linguistic comprehension. Our methodology includes constructing a multilingual profanity lexicon, human-in-the-loop + rule-augmented evaluation protocols, an open-source dataset, and an automated evaluation framework. Results: Evaluated on 20+ mainstream LLMs, we observe significantly degraded compliance under informal contexts. The benchmark delivers reproducible, quantitative safety metrics—enabling rigorous risk assessment and governance for enterprise AI deployment.

Technology Category

Application Category

📝 Abstract
Enterprise customers are increasingly adopting Large Language Models (LLMs) for critical communication tasks, such as drafting emails, crafting sales pitches, and composing casual messages. Deploying such models across different regions requires them to understand diverse cultural and linguistic contexts and generate safe and respectful responses. For enterprise applications, it is crucial to mitigate reputational risks, maintain trust, and ensure compliance by effectively identifying and handling unsafe or offensive language. To address this, we introduce SweEval, a benchmark simulating real-world scenarios with variations in tone (positive or negative) and context (formal or informal). The prompts explicitly instruct the model to include specific swear words while completing the task. This benchmark evaluates whether LLMs comply with or resist such inappropriate instructions and assesses their alignment with ethical frameworks, cultural nuances, and language comprehension capabilities. In order to advance research in building ethically aligned AI systems for enterprise use and beyond, we release the dataset and code: https://github.com/amitbcp/multilingual_profanity.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM compliance with inappropriate swear word instructions
Evaluating LLM safety for enterprise communication across cultures
Mitigating reputational risks from offensive language in AI outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark tests LLM swear word handling
Simulates real-world tone and context variations
Evaluates ethical alignment and language comprehension
🔎 Similar Papers
No similar papers found.