Analyzing Islamophobic Discourse Using Semi-Coded Terms and LLMs

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the global (particularly Western) online spread of Islamophobia by systematically investigating how semi-coded discriminatory terms such as “muzrat” and “pislam” are used on extremist social platforms and what impact they have. Methodologically, it combines web crawling, large language model (LLM)-based semantic analysis, Google Perspective API toxicity scoring, and BERT-based topic modeling to build a lexicon of such terms and conduct a large-scale empirical analysis. Results show that LLMs can identify these out-of-vocabulary (OOV) semi-coded slurs; that texts containing them are significantly more toxic than other hate speech; and that their dissemination is deeply embedded in far-right, conspiracy-theory, and anti-immigration political discourse, coalescing around three core themes: “cultural replacement,” “religious threat,” and “loss of control over immigration policy.” The study contributes a deployable detection framework for platform content moderation and provides empirical evidence linking ideologically divergent extremist narratives.
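The lexicon-matching step described above can be illustrated with a small regex-based detector. This is a minimal sketch, assuming a word-boundary match is sufficient; the function name is illustrative and the term list here covers only the five terms named in the abstract, not the study's full lexicon:

```python
import re

# Semi-coded Islamophobic slurs named in the paper's abstract.
# The study's actual lexicon is larger; this list is illustrative only.
SEMI_CODED_TERMS = ["muzrat", "pislam", "mudslime", "mohammedan", "muzzies"]

# Word-boundary pattern so the terms are not matched inside ordinary words.
_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, SEMI_CODED_TERMS)) + r")\b",
    flags=re.IGNORECASE,
)

def find_semi_coded_terms(text: str) -> list[str]:
    """Return the semi-coded terms found in `text`, lowercased, in order."""
    return [m.lower() for m in _PATTERN.findall(text)]
```

For example, `find_semi_coded_terms("a post about pislam")` returns `["pislam"]`. A real pipeline would also have to handle deliberate misspellings and leetspeak variants, which is part of why the paper turns to LLMs for OOV terms.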

📝 Abstract
Islamophobia has evolved into a global phenomenon, attracting followers across the globe, particularly in Western societies. Understanding its global spread and online dissemination is therefore crucial. This paper performs a large-scale analysis of specialized, semi-coded Islamophobic terms such as “muzrat,” “pislam,” “mudslime,” “mohammedan,” and “muzzies” floated on extremist social platforms such as 4Chan, Gab, and Telegram. First, we use large language models (LLMs) to show their ability to understand these terms. Second, using the Google Perspective API, we find that Islamophobic text is more toxic than other kinds of hate speech. Finally, we use a BERT-based topic modeling approach to extract the topics that structure Islamophobic discourse on these platforms. Our findings indicate that LLMs understand these Out-Of-Vocabulary (OOV) slurs; however, measures are still required to control such discourse. Our topic modeling also indicates that Islamophobic text appears across various political, conspiratorial, and far-right movements and is particularly directed against Muslim immigrants. Taken together, this is the first study of Islamophobic semi-coded terms, shedding light on Islamophobia globally.
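The toxicity comparison in the abstract relies on Google's Perspective API. A minimal sketch of such a request follows, using only the standard library; the helper names are illustrative, an API key is assumed, and the endpoint and response shape should be checked against the current Perspective API documentation:

```python
import json
import urllib.request

# Perspective API "analyze" endpoint (key passed as a query parameter).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(text: str) -> dict:
    """Build the JSON body for a Perspective API TOXICITY request."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def score_toxicity(text: str, api_key: str) -> float:
    """POST the request and return the summary toxicity score in [0, 1]."""
    body = json.dumps(build_toxicity_request(text)).encode("utf-8")
    req = urllib.request.Request(
        f"{PERSPECTIVE_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

Scoring each crawled post this way and grouping scores by whether the post contains a semi-coded term is one straightforward way to reproduce the kind of toxicity comparison the paper reports.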
Problem

Research questions and friction points this paper is trying to address.

Analyzing Islamophobic discourse using semi-coded terms and LLMs
Assessing toxicity of Islamophobic text compared to other hate speech
Exploring Islamophobia's link to political and far-right movements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to understand semi-coded Islamophobic terms
Analyzing toxicity with Google Perspective API
Extracting topics via BERT topic modeling
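The paper's topic extraction uses BERTopic (embedding-based clustering). As a crude, dependency-free stand-in for the keyword lists such a pipeline surfaces, per-corpus keywords can be approximated with simple term-frequency counting. This sketch does no embedding or clustering, and the tokenizer and stop-word list are illustrative:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; real pipelines use a much larger one.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "on"}

def top_keywords(docs: list[str], k: int = 5) -> list[str]:
    """Return the k most frequent non-stop-word tokens across docs."""
    counts = Counter()
    for doc in docs:
        for tok in re.findall(r"[a-z']+", doc.lower()):
            if tok not in STOP_WORDS:
                counts[tok] += 1
    return [word for word, _ in counts.most_common(k)]
```

Run over clusters of posts, keyword lists like this are what let the study label themes such as “cultural replacement” and “loss of control over immigration policy.”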