Fine-Tuning Large Language Models with QLoRA for Offensive Language Detection in Roman Urdu-English Code-Mixed Text

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of offensive language detection in Roman Urdu–English code-mixed text, characterized by nonstandard grammar, inconsistent orthography, and severe scarcity of annotated data. We propose a parameter-efficient fine-tuning framework based on QLoRA. Our method first aligns low-resource code-mixed text to English via Google Translate, then leverages English large language models (e.g., LLaMA-3 8B, Mistral-7B) and modern BERT-based models through transfer learning. To our knowledge, this is the first systematic application of QLoRA to multilingual code-mixed scenarios, effectively mitigating low-resource bottlenecks. Evaluated on a manually annotated dataset, LLaMA-3 8B achieves an F1 score of 91.45%, outperforming conventional models. These results validate the efficacy and scalability of our translation-augmented, lightweight fine-tuning paradigm for Urdu content moderation.

📝 Abstract
The use of derogatory terms in code-mixed languages such as Roman Urdu presents challenges for Natural Language Processing systems due to nonstandard grammar, inconsistent spelling, and a scarcity of labeled data. In this work, we propose a QLoRA-based fine-tuning framework to improve offensive language detection in Roman Urdu-English text. We translated the Roman Urdu-English code-mixed dataset into English using Google Translate to leverage English LLMs, while acknowledging that this translation reduces direct engagement with code-mixing features. Our focus is on classification performance using English-translated low-resource inputs. We fine-tuned several transformer and large language models, including Meta LLaMA 3 8B, Mistral 7B v0.1, LLaMA 2 7B, ModernBERT, and RoBERTa, with QLoRA for memory-efficient adaptation. Models were trained and evaluated on a manually annotated Roman Urdu dataset for offensive vs. non-offensive content. Of all tested models, Meta LLaMA 3 8B attained the highest F1 score of 91.45, followed by Mistral 7B at 89.66, surpassing traditional transformer baselines. These results demonstrate the efficacy of QLoRA for fine-tuning high-performing models in low-resource settings such as code-mixed offensive language detection, and confirm the potential of LLMs for this task. This work advances a scalable approach to Roman Urdu content moderation and paves the way for future LLM-based multilingual offensive language detection systems.
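The QLoRA recipe described in the abstract pairs 4-bit quantization of the frozen base model with small trainable low-rank adapters. A minimal configuration sketch using the Hugging Face `transformers` and `peft` libraries is shown below; all hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, since the page does not list the paper's actual values.

```python
# Sketch of a QLoRA setup for the binary offensive / non-offensive
# classification task. Hyperparameters are assumptions, not values
# reported by the paper.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Trainable low-rank adapters injected into the attention projections
lora_config = LoraConfig(
    task_type="SEQ_CLS",       # sequence-classification head
    r=16,                      # adapter rank (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    lora_dropout=0.05,         # assumed
    target_modules=["q_proj", "v_proj"],
)
```

With these configs, a base model such as Meta LLaMA 3 8B would be loaded with `quantization_config=bnb_config` and wrapped via `peft.get_peft_model(model, lora_config)`, so that only the adapter weights (a small fraction of the 8B parameters) are updated during fine-tuning.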
Problem

Research questions and friction points this paper is trying to address.

Detecting offensive language in Roman Urdu-English code-mixed text
Addressing data scarcity and inconsistent spelling in low-resource languages
Improving classification performance using memory-efficient QLoRA fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

QLoRA fine-tuning for memory-efficient LLM adaptation
Translation of code-mixed text to leverage English LLMs
Meta LLaMA 3 8B achieved highest F1 score performance
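For context on the headline metric: the reported F1 (91.45 for LLaMA-3 8B) is the harmonic mean of precision and recall on the offensive class. A stdlib-only sketch with toy labels (1 = offensive, 0 = non-offensive):

```python
# Binary F1 as used for offensive-language classification;
# the label vectors below are toy data, not the paper's dataset.
def f1_score(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(round(f1_score(y_true, y_pred), 3))  # prints 0.75
```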
Nisar Hussain
Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico
Amna Qasim
Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico
Gull Mehak
Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico
Muhammad Zain
Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico
Momina Hafeez
Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico
Grigori Sidorov
Professor of Computational Linguistics, Instituto Politécnico Nacional (IPN), Mexico
Computational Linguistics · Natural Language Processing · Artificial Intelligence · Machine Learning