Bridging the Multilingual Safety Divide: Efficient, Culturally-Aware Alignment for Global South Languages

📅 2026-02-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current safety mechanisms in large language models significantly underperform in low-resource languages and code-mixed settings prevalent in the Global South, and often fail to align with local cultural understandings of β€œharmful content.” This work proposes the first efficient, culturally sensitive safety alignment framework tailored for the Global South, integrating parameter-efficient fine-tuning, culturally contextualized preference data collection, a multilingual safety evaluation benchmark, and a community-informed workflow for defining harms. Moving beyond English-centric paradigms, the framework shifts multilingual AI safety from mere technical adaptation toward collaborative cultural co-construction, offering a systematic research pathway toward equitable and actionable localized safety systems.

πŸ“ Abstract
Large language models (LLMs) are being deployed across the Global South, where everyday use involves low-resource languages, code-mixing, and culturally specific norms. Yet safety pipelines, benchmarks, and alignment still largely target English and a handful of high-resource languages, implicitly assuming safety and factuality "transfer" across languages. Evidence increasingly shows they do not. We synthesize recent findings indicating that (i) safety guardrails weaken sharply on low-resource and code-mixed inputs, (ii) culturally harmful behavior can persist even when standard toxicity scores look acceptable, and (iii) English-only knowledge edits and safety patches often fail to carry over to low-resource languages. In response, we outline a practical agenda for researchers and students in the Global South: parameter-efficient safety steering, culturally grounded evaluation and preference data, and participatory workflows that empower local communities to define and mitigate harm. Our aim is to make multilingual safety a core requirement, not an add-on, for equitable AI in underrepresented regions.
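The abstract's first agenda item, parameter-efficient safety steering, typically means training small adapter modules rather than full model weights. The following is a minimal LoRA-style sketch in plain NumPy, not the paper's actual implementation; all sizes and names (`d_out`, `rank`, `adapted_forward`) are illustrative assumptions.

```python
import numpy as np

# Illustrative low-rank adapter (LoRA-style) for safety steering:
# the base weight matrix W stays frozen, and only a small low-rank
# update B @ A is trained on safety preference data.

rng = np.random.default_rng(0)

d_out, d_in, rank = 512, 512, 8          # hypothetical layer size and rank
W = rng.standard_normal((d_out, d_in))   # frozen base weights

# Trainable low-rank factors. B starts at zero so the adapted layer
# matches the base model before any safety fine-tuning happens.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    """Forward pass through the layer with the low-rank adapter added."""
    return W @ x + B @ (A @ x)

full_params = W.size             # what full fine-tuning would update
adapter_params = A.size + B.size # what adapter-only tuning updates
print(f"full: {full_params}, adapter: {adapter_params}, "
      f"ratio: {adapter_params / full_params:.3%}")
```

With these toy sizes the adapter touches about 3% of the layer's parameters, which is why this family of methods is attractive for teams in low-resource settings who cannot afford full fine-tuning runs.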
Problem

Research questions and friction points this paper is trying to address.

multilingual safety
low-resource languages
code-mixing
cultural harm
alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

parameter-efficient safety steering
culturally grounded evaluation
code-mixed inputs
multilingual alignment
participatory AI